title | content | commands | url |
---|---|---|---|
3.3. Automated Installation | 3.3. Automated Installation If you need to install a Red Hat JBoss product multiple times with the same configuration, you can save time by using an installation script. By using an installation script with predefined settings, you can perform the entire installation by running a single command, instead of working through the installation step by step each time. You can generate an installation script by running the installer (in graphical or text mode), stepping through it with your desired configuration, and then choosing to generate the script when prompted towards the end of the process. Prerequisites You must have downloaded the relevant installer JAR file from https://access.redhat.com/jbossnetwork/ . You must have generated the script and saved it as an XML file during an installation. Procedure 3.3. Installing with a Script Note You can also provide variables on the command line during an automated installation. Use java -jar path/to/installer.jar -variablefile /pathtofile to supply variables using a configuration file, or use java -jar path/to/installer.jar -variables EXAMPLE1=example1,EXAMPLE2=example2 to provide a comma-separated list of variables. A hedged sketch of these invocations follows this entry. | [
"java -jar jboss- PRODUCT -installer- VERSION .jar SCRIPT .xml"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/automated_installation |
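A minimal sketch of the invocation styles described in the entry above. The installer file name, script file, and variable names (auto-install.xml, vars.properties, INSTALL_PATH, ADMIN_USER) are hypothetical placeholders, not values taken from the guide.

```bash
# Run the installer with a previously generated installation script.
java -jar jboss-dv-installer-6.4.0.jar auto-install.xml

# Supply variables from a configuration file (assumes vars.properties holds KEY=value pairs).
java -jar jboss-dv-installer-6.4.0.jar -variablefile vars.properties

# Or pass the same variables inline as a comma-separated list.
java -jar jboss-dv-installer-6.4.0.jar -variables INSTALL_PATH=/opt/dv,ADMIN_USER=admin
```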
4.2. Physical Volume Administration | 4.2. Physical Volume Administration This section describes the commands that perform the various aspects of physical volume administration. 4.2.1. Creating Physical Volumes The following subsections describe the commands used for creating physical volumes. 4.2.1.1. Setting the Partition Type If you are using a whole disk device for your physical volume, the disk must have no partition table. For DOS disk partitions, the partition id should be set to 0x8e using the fdisk or cfdisk command or an equivalent. For whole disk devices only the partition table must be erased, which will effectively destroy all data on that disk. You can remove an existing partition table by zeroing the first sector with the following command: 4.2.1.2. Initializing Physical Volumes Use the pvcreate command to initialize a block device to be used as a physical volume. Initialization is analogous to formatting a file system. The following command initializes /dev/sdd , /dev/sde , and /dev/sdf as LVM physical volumes for later use as part of LVM logical volumes. To initialize partitions rather than whole disks: run the pvcreate command on the partition. The following example initializes the partition /dev/hdb1 as an LVM physical volume for later use as part of an LVM logical volume. 4.2.1.3. Scanning for Block Devices You can scan for block devices that may be used as physical volumes with the lvmdiskscan command, as shown in the following example. 4.2.2. Displaying Physical Volumes There are three commands you can use to display properties of LVM physical volumes: pvs , pvdisplay , and pvscan . The pvs command provides physical volume information in a configurable form, displaying one line per physical volume. The pvs command provides a great deal of format control, and is useful for scripting. For information on using the pvs command to customize your output, see Section 4.8, "Customized Reporting for LVM" . The pvdisplay command provides a verbose multi-line output for each physical volume. It displays physical properties (size, extents, volume group, and so on) in a fixed format. The following example shows the output of the pvdisplay command for a single physical volume. The pvscan command scans all supported LVM block devices in the system for physical volumes. The following command shows all physical devices found: You can define a filter in the lvm.conf file so that this command will avoid scanning specific physical volumes. For information on using filters to control which devices are scanned, see Section 4.5, "Controlling LVM Device Scans with Filters" . 4.2.3. Preventing Allocation on a Physical Volume You can prevent allocation of physical extents on the free space of one or more physical volumes with the pvchange command. This may be necessary if there are disk errors, or if you will be removing the physical volume. The following command disallows the allocation of physical extents on /dev/sdk1 . You can also use the -xy arguments of the pvchange command to allow allocation where it had previously been disallowed. 4.2.4. Resizing a Physical Volume If you need to change the size of an underlying block device for any reason, use the pvresize command to update LVM with the new size. You can execute this command while LVM is using the physical volume. 4.2.5. Removing Physical Volumes If a device is no longer required for use by LVM, you can remove the LVM label with the pvremove command. Executing the pvremove command zeroes the LVM metadata on an empty physical volume. 
If the physical volume you want to remove is currently part of a volume group, you must remove it from the volume group with the vgreduce command, as described in Section 4.3.7, "Removing Physical Volumes from a Volume Group" . A short worked example that strings the commands in this section together follows this entry. | [
"dd if=/dev/zero of= PhysicalVolume bs=512 count=1",
"pvcreate /dev/sdd /dev/sde /dev/sdf",
"pvcreate /dev/hdb1",
"lvmdiskscan /dev/ram0 [ 16.00 MB] /dev/sda [ 17.15 GB] /dev/root [ 13.69 GB] /dev/ram [ 16.00 MB] /dev/sda1 [ 17.14 GB] LVM physical volume /dev/VolGroup00/LogVol01 [ 512.00 MB] /dev/ram2 [ 16.00 MB] /dev/new_vg/lvol0 [ 52.00 MB] /dev/ram3 [ 16.00 MB] /dev/pkl_new_vg/sparkie_lv [ 7.14 GB] /dev/ram4 [ 16.00 MB] /dev/ram5 [ 16.00 MB] /dev/ram6 [ 16.00 MB] /dev/ram7 [ 16.00 MB] /dev/ram8 [ 16.00 MB] /dev/ram9 [ 16.00 MB] /dev/ram10 [ 16.00 MB] /dev/ram11 [ 16.00 MB] /dev/ram12 [ 16.00 MB] /dev/ram13 [ 16.00 MB] /dev/ram14 [ 16.00 MB] /dev/ram15 [ 16.00 MB] /dev/sdb [ 17.15 GB] /dev/sdb1 [ 17.14 GB] LVM physical volume /dev/sdc [ 17.15 GB] /dev/sdc1 [ 17.14 GB] LVM physical volume /dev/sdd [ 17.15 GB] /dev/sdd1 [ 17.14 GB] LVM physical volume 7 disks 17 partitions 0 LVM physical volume whole disks 4 LVM physical volumes",
"pvdisplay --- Physical volume --- PV Name /dev/sdc1 VG Name new_vg PV Size 17.14 GB / not usable 3.40 MB Allocatable yes PE Size (KByte) 4096 Total PE 4388 Free PE 4375 Allocated PE 13 PV UUID Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe",
"pvscan PV /dev/sdb2 VG vg0 lvm2 [964.00 MB / 0 free] PV /dev/sdc1 VG vg0 lvm2 [964.00 MB / 428.00 MB free] PV /dev/sdc2 lvm2 [964.84 MB] Total: 3 [2.83 GB] / in use: 2 [1.88 GB] / in no VG: 1 [964.84 MB]",
"pvchange -x n /dev/sdk1",
"pvremove /dev/ram15 Labels on physical volume \"/dev/ram15\" successfully wiped"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/physvol_admin |
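To tie the subsections above together, the following destructive walk-through initializes a spare disk as a physical volume, inspects it, toggles allocation, and removes the label again. The device name /dev/sdb is a hypothetical placeholder; the first command erases any partition table on it.

```bash
dd if=/dev/zero of=/dev/sdb bs=512 count=1   # zero the first sector to clear an old partition table
pvcreate /dev/sdb                            # initialize the whole disk as an LVM physical volume
pvs /dev/sdb                                 # one-line summary of the new physical volume
pvchange -x n /dev/sdb                       # disallow allocation of physical extents on it
pvchange -x y /dev/sdb                       # allow allocation again
pvresize /dev/sdb                            # update LVM after the underlying device changes size
pvremove /dev/sdb                            # wipe the LVM label when the device is no longer needed
```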
Chapter 9. Disabling the web console in OpenShift Container Platform | Chapter 9. Disabling the web console in OpenShift Container Platform You can disable the OpenShift Container Platform web console. 9.1. Prerequisites Deploy an OpenShift Container Platform cluster. 9.2. Disabling the web console You can disable the web console by editing the consoles.operator.openshift.io resource. Edit the consoles.operator.openshift.io resource: $ oc edit consoles.operator.openshift.io cluster The following example displays the parameters from this resource that you can modify: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1 1 Set the managementState parameter value to Removed to disable the web console. The other valid values for this parameter are Managed , which enables the console under the cluster's control, and Unmanaged , which means that you are taking control of web console management. A non-interactive alternative using oc patch is sketched after this entry. | [
"oc edit consoles.operator.openshift.io cluster",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/web_console/disabling-web-console |
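The chapter above only shows the interactive oc edit flow. As a hedged, non-interactive alternative, the same field can be set with oc patch; the sketch below assumes cluster-admin credentials and an otherwise default console operator configuration.

```bash
# Set managementState to Removed without opening an editor.
oc patch consoles.operator.openshift.io cluster \
  --type merge \
  --patch '{"spec":{"managementState":"Removed"}}'

# Confirm the change.
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
```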
8.35. emacs | 8.35. emacs 8.35.1. RHBA-2013:1088 - emacs bug fix update Updated emacs packages that fix a bug are now available for Red Hat Enterprise Linux 6. GNU Emacs is a powerful, customizable, self-documenting text editor. It provides special code editing features, a scripting language (elisp), and the capability to read email and news. Bug Fix BZ#678225 The Lucida Typewriter and Lucida Console fonts were not usable with Emacs 23.1 in Red Hat Enterprise Linux 6. Consequently, the following error message was displayed in the Messages buffer: "set-face-attribute: Font not available". With this update, no error message is displayed in this scenario and the selected font can be used to display the buffer contents. Users of emacs are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/emacs |
Chapter 7. SubjectRulesReview [authorization.openshift.io/v1] | Chapter 7. SubjectRulesReview [authorization.openshift.io/v1] Description SubjectRulesReview is a resource you can create to determine which actions another user can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object SubjectRulesReviewSpec adds information about how to conduct the check status object SubjectRulesReviewStatus is contains the result of a rules check 7.1.1. .spec Description SubjectRulesReviewSpec adds information about how to conduct the check Type object Required user groups scopes Property Type Description groups array (string) Groups is optional. Groups is the list of groups to which the User belongs. At least one of User and Groups must be specified. scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". user string User is optional. At least one of User and Groups must be specified. 7.1.2. .status Description SubjectRulesReviewStatus is contains the result of a rules check Type object Required rules Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It means some error happened during evaluation that may have prevented additional rules from being populated. rules array Rules is the list of rules (no particular sort) that are allowed for the subject rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 7.1.3. .status.rules Description Rules is the list of rules (no particular sort) that are allowed for the subject Type array 7.1.4. .status.rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. 
*s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 7.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/subjectrulesreviews POST : create a SubjectRulesReview 7.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/subjectrulesreviews Table 7.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SubjectRulesReview Table 7.2. Body parameters Parameter Type Description body SubjectRulesReview schema Table 7.3. HTTP responses HTTP code Response body 200 - OK SubjectRulesReview schema 201 - Created SubjectRulesReview schema 202 - Accepted SubjectRulesReview schema 401 - Unauthorized Empty A hedged example of calling this endpoint from the command line follows this entry. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authorization_apis/subjectrulesreview-authorization-openshift-io-v1 |
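A sketch of driving the POST endpoint above from the shell. The namespace (demo), user (alice), and group (developers) are made-up values; the empty scopes list requests the unscoped (full) permissions of the user and groups, and -o yaml prints the response, including status.rules.

```bash
cat <<'EOF' | oc create -n demo -o yaml -f -
apiVersion: authorization.openshift.io/v1
kind: SubjectRulesReview
spec:
  user: alice
  groups:
  - developers
  scopes: []
EOF
```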
Chapter 5. Device Drivers | Chapter 5. Device Drivers This chapter provides a comprehensive listing of all device drivers that are new or have been updated in Red Hat Enterprise Linux 7. 5.1. New Drivers Graphics Drivers and Miscellaneous Drivers halt poll cpuidle driver (cpuidle-haltpoll.ko.xz) Intel(R) Trace Hub controller driver (intel_th.ko.xz) Intel(R) Trace Hub ACPI controller driver (intel_th_acpi.ko.xz) Intel(R) Trace Hub Global Trace Hub driver (intel_th_gth.ko.xz) Intel(R) Trace Hub Memory Storage Unit driver (intel_th_msu.ko.xz) Intel(R) Trace Hub PCI controller driver (intel_th_pci.ko.xz) Intel(R) Trace Hub PTI/LPP output driver (intel_th_pti.ko.xz) Intel(R) Trace Hub Software Trace Hub driver (intel_th_sth.ko.xz) dummy_stm device (dummy_stm.ko.xz) stm_console driver (stm_console.ko.xz) System Trace Module device class (stm_core.ko.xz) stm_ftrace driver (stm_ftrace.ko.xz) stm_heartbeat driver (stm_heartbeat.ko.xz) Basic STM framing protocol driver (stm_p_basic.ko.xz) MIPI SyS-T STM framing protocol driver (stm_p_sys-t.ko.xz) Network Drivers gVNIC Driver (gve.ko.xz): 1.0.0. Failover driver for Paravirtual drivers (net_failover.ko.xz) 5.2. Updated Drivers Network Driver Updates Emulex OneConnect NIC Driver (be2net.ko.xz) has been updated to version 12.0.0.0r. Intel(R) Ethernet Connection XL710 Network Driver (i40e.ko.xz) has been updated to version 2.8.20-k. The Netronome Flow Processor (NFP) driver (nfp.ko.xz) has been updated to version 3.10.0-1122.el7.x86_64. Storage Driver Updates QLogic FCoE Driver (bnx2fc.ko.xz) has been updated to version 2.12.10. Driver for HP Smart Array Controller version (hpsa.ko.xz) has been updated to version 3.4.20-170-RH4. Emulex LightPulse Fibre Channel SCSI driver (lpfc.ko.xz) has been updated to version 0:12.0.0.13. Broadcom MegaRAID SAS Driver (megaraid_sas.ko.xz) has been updated to version 07.710.50.00-rh1. LSI MPT Fusion SAS 3.0 Device Driver (mpt3sas.ko.xz) has been updated to version 31.100.01.00. QLogic QEDF 25/40/50/100Gb FCoE Driver (qedf.ko.xz) has been updated to version 8.37.25.20. QLogic FastLinQ 4xxxx iSCSI Module (qedi.ko.xz) has been updated to version 8.37.0.20. QLogic Fibre Channel HBA Driver (qla2xxx.ko.xz) has been updated to version 10.01.00.20.07.8-k. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/device_drivers |
Chapter 99. MLLP | Chapter 99. MLLP Both producer and consumer are supported The MLLP component is specifically designed to handle the nuances of the MLLP protocol and provide the functionality required by Healthcare providers to communicate with other systems using the MLLP protocol. The MLLP component provides a simple configuration URI, automated HL7 acknowledgment generation and automatic acknowledgment interrogation. The MLLP protocol does not typically use a large number of concurrent TCP connections - a single active TCP connection is the normal case. Therefore, the MLLP component uses a simple thread-per-connection model based on standard Java Sockets. This keeps the implementation simple and eliminates the dependencies on only Camel itself. The component supports the following: A Camel consumer using a TCP Server A Camel producer using a TCP Client The MLLP component use byte[] payloads, and relies on Camel type conversion to convert byte[] to other types. 99.1. Dependencies When using mllp with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mllp-starter</artifactId> </dependency> 99.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 99.2.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 99.2.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 99.3. Component Options The MLLP component supports 30 options, which are listed below. Name Description Default Type autoAck (common) Enable/Disable the automatic generation of a MLLP Acknowledgement MLLP Consumers only. true boolean charsetName (common) Sets the default charset to use. String configuration (common) Sets the default configuration to use when creating MLLP endpoints. 
MllpConfiguration hl7Headers (common) Enable/Disable the automatic generation of message headers from the HL7 Message MLLP Consumers only. true boolean requireEndOfData (common) Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START_OF_BLOCKhl7 payloadEND_OF_BLOCKEND_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. true boolean stringPayload (common) Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine th appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be use. true boolean validatePayload (common) Enable/Disable the validation of HL7 Payloads If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If and invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. false boolean acceptTimeout (consumer) Timeout (in milliseconds) while waiting for a TCP connection TCP Server Only. 60000 int backlog (consumer) The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. 5 Integer bindRetryInterval (consumer) TCP Server Only - The number of milliseconds to wait between bind attempts. 5000 int bindTimeout (consumer) TCP Server Only - The number of milliseconds to retry binding to a server port. 30000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignored. true boolean lenientBind (consumer) TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. false boolean maxConcurrentConsumers (consumer) The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum is number are already established, the new connection will be reset immediately. 5 int reuseAddress (consumer) Enable/disable the SO_REUSEADDR socket option. false Boolean exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut InOut ExchangePattern connectTimeout (producer) Timeout (in milliseconds) for establishing for a TCP connection TCP Client only. 30000 int idleTimeoutStrategy (producer) decide what action to take when idle timeout occurs. Possible values are : RESET: set SO_LINGER to 0 and reset the socket CLOSE: close the socket gracefully default is RESET. Enum values: RESET CLOSE RESET MllpIdleTimeoutStrategy keepAlive (producer) Enable/disable the SO_KEEPALIVE socket option. 
true Boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean tcpNoDelay (producer) Enable/disable the TCP_NODELAY socket option. true Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean defaultCharset (advanced) Set the default character set to use for byte to/from String conversions. ISO-8859-1 String logPhi (advanced) Whether to log PHI. true Boolean logPhiMaxBytes (advanced) Set the maximum number of bytes of PHI that will be logged in a log entry. 5120 Integer readTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. 5000 int receiveBufferSize (advanced) Sets the SO_RCVBUF option to the specified value (in bytes). 8192 Integer receiveTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. 15000 int sendBufferSize (advanced) Sets the SO_SNDBUF option to the specified value (in bytes). 8192 Integer idleTimeout (tcp) The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. Integer 99.4. Endpoint Options The MLLP endpoint is configured using URI syntax: with the following path and query parameters: 99.4.1. Path Parameters (2 parameters) Name Description Default Type hostname (common) Required Hostname or IP for connection for the TCP connection. The default value is null, which means any local IP address. String port (common) Required Port number for the TCP connection. int 99.4.2. Query Parameters (26 parameters) Name Description Default Type autoAck (common) Enable/Disable the automatic generation of a MLLP Acknowledgement MLLP Consumers only. true boolean charsetName (common) Sets the default charset to use. String hl7Headers (common) Enable/Disable the automatic generation of message headers from the HL7 Message MLLP Consumers only. true boolean requireEndOfData (common) Enable/Disable strict compliance to the MLLP standard. The MLLP standard specifies START_OF_BLOCKhl7 payloadEND_OF_BLOCKEND_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. true boolean stringPayload (common) Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine th appropriate character set. 
If MSH-18 is not set, then the default ISO-8859-1 character set will be use. true boolean validatePayload (common) Enable/Disable the validation of HL7 Payloads If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If and invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. false boolean acceptTimeout (consumer) Timeout (in milliseconds) while waiting for a TCP connection TCP Server Only. 60000 int backlog (consumer) The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. 5 Integer bindRetryInterval (consumer) TCP Server Only - The number of milliseconds to wait between bind attempts. 5000 int bindTimeout (consumer) TCP Server Only - The number of milliseconds to retry binding to a server port. 30000 int bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignored. true boolean lenientBind (consumer) TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. false boolean maxConcurrentConsumers (consumer) The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum is number are already established, the new connection will be reset immediately. 5 int reuseAddress (consumer) Enable/disable the SO_REUSEADDR socket option. false Boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut InOut ExchangePattern connectTimeout (producer) Timeout (in milliseconds) for establishing for a TCP connection TCP Client only. 30000 int idleTimeoutStrategy (producer) decide what action to take when idle timeout occurs. Possible values are : RESET: set SO_LINGER to 0 and reset the socket CLOSE: close the socket gracefully default is RESET. Enum values: RESET CLOSE RESET MllpIdleTimeoutStrategy keepAlive (producer) Enable/disable the SO_KEEPALIVE socket option. true Boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean tcpNoDelay (producer) Enable/disable the TCP_NODELAY socket option. true Boolean readTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. 5000 int receiveBufferSize (advanced) Sets the SO_RCVBUF option to the specified value (in bytes). 8192 Integer receiveTimeout (advanced) The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. 15000 int sendBufferSize (advanced) Sets the SO_SNDBUF option to the specified value (in bytes). 8192 Integer idleTimeout (tcp) The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. Integer 99.5. MLLP Consumer The MLLP Consumer supports receiving MLLP-framed messages and sending HL7 Acknowledgements. The MLLP Consumer can automatically generate the HL7 Acknowledgement (HL7 Application Acknowledgements only - AA, AE and AR), or the acknowledgement can be specified using the CamelMllpAcknowledgement exchange property. Additionally, the type of acknowledgement that will be generated can be controlled by setting the CamelMllpAcknowledgementType exchange property. The MLLP Consumer can read messages without sending any HL7 Acknowledgement if the automatic acknowledgement is disabled and exchange pattern is InOnly. 99.5.1. Message Headers The MLLP Consumer adds these headers on the Camel message: Key Description CamelMllpLocalAddress The local TCP Address of the Socket CamelMllpRemoteAddress The local TCP Address of the Socket CamelMllpSendingApplication MSH-3 value CamelMllpSendingFacility MSH-4 value CamelMllpReceivingApplication MSH-5 value CamelMllpReceivingFacility MSH-6 value CamelMllpTimestamp MSH-7 value CamelMllpSecurity MSH-8 value CamelMllpMessageType MSH-9 value CamelMllpEventType MSH-9-1 value CamelMllpTriggerEvent MSH-9-2 value CamelMllpMessageControlId MSH-10 value CamelMllpProcessingId MSH-11 value CamelMllpVersionId MSH-12 value CamelMllpCharset MSH-18 value All headers are String types. If a header value is missing, its value is null. 99.5.2. Exchange Properties The type of acknowledgment the MLLP Consumer generates and state of the TCP Socket can be controlled by these properties on the Camel exchange: Key Type Description CamelMllpAcknowledgement byte[] If present, this property will we sent to client as the MLLP Acknowledgement CamelMllpAcknowledgementString String If present and CamelMllpAcknowledgement is not present, this property will we sent to client as the MLLP Acknowledgement CamelMllpAcknowledgementMsaText String If neither CamelMllpAcknowledgement or CamelMllpAcknowledgementString are present and autoAck is true, this property can be used to specify the contents of MSA-3 in the generated HL7 acknowledgement CamelMllpAcknowledgementType String If neither CamelMllpAcknowledgement or CamelMllpAcknowledgementString are present and autoAck is true, this property can be used to specify the HL7 acknowledgement type (i.e. 
AA, AE, AR) CamelMllpAutoAcknowledge Boolean Overrides the autoAck query parameter CamelMllpCloseConnectionBeforeSend Boolean If true, the Socket will be closed before sending data CamelMllpResetConnectionBeforeSend Boolean If true, the Socket will be reset before sending data CamelMllpCloseConnectionAfterSend Boolean If true, the Socket will be closed immediately after sending data CamelMllpResetConnectionAfterSend Boolean If true, the Socket will be reset immediately after sending any data 99.6. MLLP Producer The MLLP Producer supports sending MLLP-framed messages and receiving HL7 Acknowledgements. The MLLP Producer interrogates the HL7 Acknowledgments and raises exceptions if a negative acknowledgement is received. The received acknowledgement is interrogated and an exception is raised in the event of a negative acknowledgement. The MLLP Producer can ignore acknowledgements when configured with InOnly exchange pattern. 99.6.1. Message Headers The MLLP Producer adds these headers on the Camel message: Key Description CamelMllpLocalAddress The local TCP Address of the Socket CamelMllpRemoteAddress The remote TCP Address of the Socket CamelMllpAcknowledgement The HL7 Acknowledgment byte[] received CamelMllpAcknowledgementString The HL7 Acknowledgment received, converted to a String 99.6.2. Exchange Properties The state of the TCP Socket can be controlled by these properties on the Camel exchange: Key Type Description CamelMllpCloseConnectionBeforeSend Boolean If true, the Socket will be closed before sending data CamelMllpResetConnectionBeforeSend Boolean If true, the Socket will be reset before sending data CamelMllpCloseConnectionAfterSend Boolean If true, the Socket will be closed immediately after sending data CamelMllpResetConnectionAfterSend Boolean If true, the Socket will be reset immediately after sending any data 99.7. Spring Boot Auto-Configuration The component supports 31 options, which are listed below. Name Description Default Type camel.component.mllp.accept-timeout Timeout (in milliseconds) while waiting for a TCP connection TCP Server Only. 60000 Integer camel.component.mllp.auto-ack Enable/Disable the automatic generation of a MLLP Acknowledgement MLLP Consumers only. true Boolean camel.component.mllp.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mllp.backlog The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. 5 Integer camel.component.mllp.bind-retry-interval TCP Server Only - The number of milliseconds to wait between bind attempts. 5000 Integer camel.component.mllp.bind-timeout TCP Server Only - The number of milliseconds to retry binding to a server port. 30000 Integer camel.component.mllp.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to receive incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
If disabled, the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions by logging them at WARN or ERROR level and ignored. true Boolean camel.component.mllp.charset-name Sets the default charset to use. String camel.component.mllp.configuration Sets the default configuration to use when creating MLLP endpoints. The option is a org.apache.camel.component.mllp.MllpConfiguration type. MllpConfiguration camel.component.mllp.connect-timeout Timeout (in milliseconds) for establishing for a TCP connection TCP Client only. 30000 Integer camel.component.mllp.default-charset Set the default character set to use for byte to/from String conversions. ISO-8859-1 String camel.component.mllp.enabled Whether to enable auto configuration of the mllp component. This is enabled by default. Boolean camel.component.mllp.exchange-pattern Sets the exchange pattern when the consumer creates an exchange. ExchangePattern camel.component.mllp.hl7-headers Enable/Disable the automatic generation of message headers from the HL7 Message MLLP Consumers only. true Boolean camel.component.mllp.idle-timeout The approximate idle time allowed before the Client TCP Connection will be reset. A null value or a value less than or equal to zero will disable the idle timeout. Integer camel.component.mllp.idle-timeout-strategy decide what action to take when idle timeout occurs. Possible values are : RESET: set SO_LINGER to 0 and reset the socket CLOSE: close the socket gracefully default is RESET. MllpIdleTimeoutStrategy camel.component.mllp.keep-alive Enable/disable the SO_KEEPALIVE socket option. true Boolean camel.component.mllp.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mllp.lenient-bind TCP Server Only - Allow the endpoint to start before the TCP ServerSocket is bound. In some environments, it may be desirable to allow the endpoint to start before the TCP ServerSocket is bound. false Boolean camel.component.mllp.log-phi Whether to log PHI. true Boolean camel.component.mllp.log-phi-max-bytes Set the maximum number of bytes of PHI that will be logged in a log entry. 5120 Integer camel.component.mllp.max-concurrent-consumers The maximum number of concurrent MLLP Consumer connections that will be allowed. If a new connection is received and the maximum is number are already established, the new connection will be reset immediately. 5 Integer camel.component.mllp.read-timeout The SO_TIMEOUT value (in milliseconds) used after the start of an MLLP frame has been received. 5000 Integer camel.component.mllp.receive-buffer-size Sets the SO_RCVBUF option to the specified value (in bytes). 8192 Integer camel.component.mllp.receive-timeout The SO_TIMEOUT value (in milliseconds) used when waiting for the start of an MLLP frame. 15000 Integer camel.component.mllp.require-end-of-data Enable/Disable strict compliance to the MLLP standard. 
The MLLP standard specifies START_OF_BLOCKhl7 payloadEND_OF_BLOCKEND_OF_DATA, however, some systems do not send the final END_OF_DATA byte. This setting controls whether or not the final END_OF_DATA byte is required or optional. true Boolean camel.component.mllp.reuse-address Enable/disable the SO_REUSEADDR socket option. false Boolean camel.component.mllp.send-buffer-size Sets the SO_SNDBUF option to the specified value (in bytes). 8192 Integer camel.component.mllp.string-payload Enable/Disable converting the payload to a String. If enabled, HL7 Payloads received from external systems will be validated converted to a String. If the charsetName property is set, that character set will be used for the conversion. If the charsetName property is not set, the value of MSH-18 will be used to determine th appropriate character set. If MSH-18 is not set, then the default ISO-8859-1 character set will be use. true Boolean camel.component.mllp.tcp-no-delay Enable/disable the TCP_NODELAY socket option. true Boolean camel.component.mllp.validate-payload Enable/Disable the validation of HL7 Payloads If enabled, HL7 Payloads received from external systems will be validated (see Hl7Util.generateInvalidPayloadExceptionMessage for details on the validation). If and invalid payload is detected, a MllpInvalidMessageException (for consumers) or a MllpInvalidAcknowledgementException will be thrown. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mllp-starter</artifactId> </dependency>",
"mllp:hostname:port"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mllp-component-starter |
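The Spring Boot properties listed in section 99.7 can also be supplied as command-line arguments when launching an application, which is handy for trying out MLLP settings without editing application.properties. The jar name my-hl7-bridge.jar is a hypothetical packaged Camel Spring Boot application, and the values shown simply override a few of the defaults documented above.

```bash
java -jar my-hl7-bridge.jar \
  --camel.component.mllp.auto-ack=true \
  --camel.component.mllp.string-payload=true \
  --camel.component.mllp.idle-timeout=30000 \
  --camel.component.mllp.log-phi=false
```

A route inside that application would then address the component through the documented URI syntax, for example mllp:0.0.0.0:2575 for a consumer listening on port 2575.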
8.2. Using Roles | 8.2. Using Roles Roles are an entry grouping mechanism that unifies the static and dynamic groups described in the previous sections. Roles are designed to be more efficient and easier to use for applications. For example, an application can get the list of roles of which an entry is a member by querying the entry itself, rather than selecting a group and browsing the members list of several groups. 8.2.1. About Roles Directory Server has two kinds of groups. Static groups have a finite and defined list of members. Dynamic groups use filters to recognize which entries are members of the group, so the group membership changes constantly as the entries which match the group filter change. (Both kinds of groups are described in Section 8.1, "Using Groups" .) Roles are a sort of hybrid group, behaving as both a static and a dynamic group. With a group, entries are added to a group entry as members. With a role, the role attribute is added to an entry and then that attribute is used to identify members in the role entry automatically. Role members are entries that possess the role. Members can be specified either explicitly or dynamically. How role membership is specified depends upon the type of role. Directory Server supports three types of roles: Managed roles have an explicit enumerated list of members. Filtered roles assign entries to the role depending upon an attribute contained in each entry, specified in an LDAP filter. Entries that match the filter possess the role. Nested roles are roles that contain other roles. Managed roles can do everything that can normally be done with static groups. The role members can be filtered using filtered roles, similarly to the filtering with dynamic groups. Roles are easier to use than groups, more flexible in their implementation, and reduce client complexity. When a role is created, determine whether a user can add themselves or remove themselves from the role. See Section 8.2.4, "Using Roles Securely" for more information about roles and access control. Note Evaluating roles is more resource-intensive for the Directory Server than evaluating groups because the server does the work for the client application. With roles, the client application can check role membership by searching for the nsRole attribute. The nsRole attribute is a computed attribute, which identifies to which roles an entry belongs; the nsRole attribute is not stored with the entry itself. From the client application point of view, the method for checking membership is uniform and is performed on the server side. Considerations for using roles are covered in the Red Hat Directory Server Deployment Guide . 8.2.2. Managing Roles Using the Command Line You can view, create, and delete roles using the command line. 8.2.2.1. Creating a Managed Role Managed roles have an explicit enumerated list of members. Managed roles are added to entries by adding the nsRoleDN attribute to the entry. 8.2.2.1.1. Creating Managed Roles through the Command Line Roles inherit from the ldapsubentry object class, which is defined in the ITU X.509 standard. In addition, each managed role requires two object classes that inherit from the nsRoleDefinition object class: nsSimpleRoleDefinition nsManagedRoleDefinition A managed role also allows an optional description attribute. Members of a managed role have the nsRoleDN attribute in their entry. This example creates a role which can be assigned to the marketing department. Use ldapmodify with the -a option to add the managed role entry.
The new entry must contain the nsManagedRoleDefinition object class, which in turn inherits from the LdapSubEntry , nsRoleDefinition , and nsSimpleRoleDefinition object classes. Assign the role to the marketing staff members, one by one, using ldapmodify : The nsRoleDN attribute in the entry indicates that the entry is a member of a managed role, cn=Marketing,ou=people,dc=example,dc=com . 8.2.2.2. Creating a Filtered Role Entries are assigned to a filtered role depending whether the entry possesses a specific attribute defined in the role. The role definition specifies an LDAP filter for the target attributes. Entries that match the filter possess (are members of) the role. 8.2.2.2.1. Creating a Filtered Role through the Command Line Roles inherit from the ldapsubentry object class, which is defined in the ITU X.509 standard. In addition, each filtered role requires two object classes that inherit from the nsRoleDefinition object class: nsComplexRoleDefinition nsFilteredRoleDefinition A filtered role entry also requires the nsRoleFilter attribute to define the LDAP filter to determine role members. Optionally, the role can take a description attribute. Members of a filtered role are entries that match the filter specified in the nsRoleFilter attribute. This example creates a filtered role which is applied to all sales managers. Run ldapmodify with the -a option to add a new entry. Create the filtered role entry. The role entry has the nsFilteredRoleDefinition object class, which inherits from the LdapSubEntry , nsRoleDefinition , and nsComplexRoleDefinition object classes. The nsRoleFilter attribute sets a filter for o (organization) attributes that contain a value of sales managers . The following entry matches the filter (possesses the o attribute with the value sales managers ), and, therefore, it is a member of this filtered role automatically: 8.2.2.3. Creating a Nested Role Nested roles are roles that contain other roles. Before it is possible to create a nested role, another role must exist. The roles nested within the nested role are specified using the nsRoleDN attribute. 8.2.2.3.1. Creating Nested Role through the Command Line Roles inherit from the ldapsubentry object class, which is defined in the ITU X.509 standard. In addition, each nested role requires two object classes that inherit from the nsRoleDefinition object class: nsComplexRoleDefinition nsNestedRoleDefinition A nested role entry also requires the nsRoleDN attribute to identify the roles to nest within the container role. Optionally, the role can take a description attribute. Members of a nested role are members of the roles specified in the nsRoleDN attributes of the nested role definition entry. This example creates a single role out of the managed marketing role and filtered sales manager role. Run ldapmodify with the -a option to add a new entry. Create the nested role entry. The nested role has four object classes: nsNestedRoleDefinition LDAPsubentry (inherited) nsRoleDefinition (inherited) nsComplexRoleDefinition (inherited) The nsRoleDN attributes contain the DNs for both the marketing managed role and the sales managers filtered role. Both of the users in the examples, Bob and Pat, are members of this new nested role. 8.2.2.4. Viewing Roles for an Entry through the Command Line Role assignments are not returned automatically through the command line. The nsRole attribute is an operational attribute. In LDAP, operational attributes must be requested explicitly. 
They are not returned by default with the regular attributes in the schema of the entry. You can either explicitly request single operational attributes by listing them or use + to output all operational attributes for result objects. For example, this ldapsearch command returns the list of roles of which uid= user_name is a member, in addition to the regular attributes for the entry: 8.2.2.5. About Deleting Roles Deleting a role deletes the role entry but does not delete the nsRoleDN attribute for each role member. To delete the nsRoleDN attribute for each role member, enable the Referential Integrity plug-in, and configure it to manage the nsRoleDN attribute. For more information on the Referential Integrity plug-in, see Chapter 5, Maintaining Referential Integrity . 8.2.3. Managing Roles in Directory Server Using the LDAP Browser A role is a grouping mechanism that unifies static and dynamic groups. 8.2.3.1. Creating a role in the LDAP browser You can create a role for a Red Hat Directory Server entry by using the LDAP Browser wizard in the web console. Prerequisites Access to the web console. A parent entry exists in the Red Hat Directory Server. Procedure Log in to the web console and click Red Hat Directory Server . After the web console loads the Red Hat Directory Server interface, click LDAP Browser . Select an LDAP entry and click the Options menu. From the drop-down menu, select New and click Create a new role . Follow the steps in the wizard and click the button after you complete each step. To create the role, review the role settings in the Create Role step and click the Create button . You can click the Back button to modify the role settings or click the Cancel button to cancel the role creation. To close the wizard window, click the Finish button. Verification Expand the LDAP entry and verify the new role appears among the entry parameters. 8.2.3.2. Modifying a Role in the LDAP browser You can modify the role parameters for a Red Hat Directory Server entry using the LDAP Browser in the web console. Prerequisites Access to the web console. A parent entry exists in the Red Hat Directory Server. Procedure Log in to the web console and click Red Hat Directory Server . After the web console loads the Red Hat Directory Server interface, click LDAP Browser . Expand the LDAP entry and select the role you are modifying. Click the Options menu and select Edit to modify the parameters of the role or Rename to rename the role. In the wizard window modify the necessary parameters and click after each step until you observe the LDIF Statements step. Check the updated parameters and click Modify Entry or Change Entry Name . To close the wizard window, click the Finish button. Verification Expand the LDAP entry and verify the updated parameters are listed for the role. 8.2.3.3. Deleting a Role in the LDAP browser You can delete a role from the Red Hat Directory Server entry by using the LDAP Browser in the web console. Prerequisites Access to the web console. A parent entry exists in the Red Hat Directory Server. Procedure Log in to the web console and click Red Hat Directory Server . After the web console loads the Red Hat Directory Server interface, click LDAP Browser . Expand the LDAP entry and select the role which you want to delete. Open the Options menu and select Delete . Verify the data about the role you want to delete and click the button until you reach the Deletion step. Toggle the switch to the Yes, I'm sure position and click the Delete button. 
To close the wizard window, click the Finish button. Verification Expand the LDAP entry and verify the role is no longer a part of the entry details. 8.2.4. Using Roles Securely Not every role is suitable for use in a security context. When creating a new role, consider how easily the role can be assigned to and removed from an entry. Sometimes it is appropriate for users to be able to add or remove themselves easily from a role. For example, if there is an interest group role called Mountain Biking , interested users should be able to add themselves or remove themselves easily. However, it is inappropriate to have such open roles for some security situations. One potential security risk is inactivating user accounts by inactivating roles. Inactive roles have special ACIs defined for their suffix. If an administrator allows users to add and remove themselves from roles freely, then in some circumstance, they may be able to remove themselves from an inactive role to prevent their accounts from being locked. For example, user A possesses the managed role, MR . The MR role has been locked using account inactivation. This means that user A cannot bind to the server because the nsAccountLock attribute is computed as true for that user. However, if user A was already bound to Directory Server and noticed that he is now locked through the MR role, the user can remove the nsRoleDN attribute from his entry and unlock himself if there are no ACIs preventing him. To prevent users from removing the nsRoleDN attribute, use the following ACIs depending upon the type of role being used. Managed roles. For entries that are members of a managed role, use the following ACI to prevent users from unlocking themselves by removing the appropriate nsRoleDN : Filtered roles. The attributes that are part of the filter should be protected so that the user cannot relinquish the filtered role by modifying an attribute. The user should not be allowed to add, delete, or modify the attribute used by the filtered role. If the value of the filter attribute is computed, then all attributes that can modify the value of the filter attribute should be protected in the same way. Nested roles. A nested role is comprised of filtered and managed roles, so both ACIs should be considered for modifying the attributes ( nsRoleDN or something else) of the roles that comprise the nested role. For more information about account inactivation, see Section 20.16, "Manually Inactivating Users and Roles" . | [
"dn: cn=Marketing,ou=people,dc=example,dc=com objectclass: top objectclass: LdapSubEntry objectclass: nsRoleDefinition objectclass: nsSimpleRoleDefinition objectclass: nsManagedRoleDefinition cn: Marketing description: managed role for marketing staff",
"dn: cn=Bob,ou=people,dc=example,dc=com changetype: modify add: nsRoleDN nsRoleDN: cn=Marketing,ou=people,dc=example,dc=com",
"dn: cn=SalesManagerFilter,ou=people,dc=example,dc=com changetype: add objectclass: top objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsComplexRoleDefinition objectclass: nsFilteredRoleDefinition cn: SalesManagerFilter nsRoleFilter: o=sales managers Description: filtered role for sales managers",
"dn: cn=Pat Smith,ou=people,dc=example,dc=com objectclass: person cn: Pat sn: Smith userPassword: secret o: sales managers",
"dn: cn=MarketingSales,ou=people,dc=example,dc=com objectclass: top objectclass: LDAPsubentry objectclass: nsRoleDefinition objectclass: nsComplexRoleDefinition objectclass: nsNestedRoleDefinition cn: MarketingSales nsRoleDN: cn=SalesManagerFilter,ou=people,dc=example,dc=com nsRoleDN: cn=Marketing,ou=people,dc=example,dc=com",
"ldapsearch -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -b \"dc=example,dc=com\" -s sub -x \"(uid= user_name )\"\" \\* nsRole dn: uid= user_name ,ou=people,dc=example,dc=com nsRole: cn=Role for Managers,dc=example,dc=com nsRole: cn=Role for Accounting,dc=example,dc=com",
"aci: (targetattr=\"nsRoleDN\") (targattrfilters= add=nsRoleDN:(!(nsRoleDN=cn=AdministratorRole,dc=example,dc=com)), del=nsRoleDN:(!(nsRoleDN=cn=nsManagedDisabledRole,dc=example,dc=com))) (version3.0;acl \"allow mod of nsRoleDN by self but not to critical values\"; allow(write) userdn=ldap:///self;)"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/advanced_entry_management-using_roles |
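The ldapsearch example in the command listing above names the nsRole operational attribute explicitly. As the section on operational attributes notes, a plus sign can request every operational attribute at once. The following is a minimal sketch rather than a command taken from the source guide: it assumes the same bind DN, host, port, and suffix as the examples above, and uid=user_name stands in for an existing entry.

# "*" keeps the regular attributes in the output; "+" requests all operational attributes, per the section above
ldapsearch -D "cn=Directory Manager" -W -p 389 -h server.example.com \
    -b "dc=example,dc=com" -s sub -x "(uid=user_name)" "*" "+"

With both arguments present, the role membership (nsRole) is returned alongside the entry's regular attributes without listing each operational attribute by name.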
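For the filtered-role guidance above (protect the attribute that the role filter matches so members cannot add, delete, or modify it themselves), a minimal sketch follows. It assumes the SalesManagerFilter example shown earlier, where o is the filter attribute and member entries sit under ou=people,dc=example,dc=com; the ACI name and its placement on that subtree are illustrative assumptions, not taken from the source guide.

ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x << EOF
# deny self-service writes to the attribute used by the SalesManagerFilter role filter
dn: ou=people,dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr="o")(version 3.0; acl "protect filtered role attribute"; deny (write) userdn="ldap:///self";)
EOF

Denying self-write on o means a member cannot change the attribute that places them in, or removes them from, the filtered role; any other attributes referenced by the role filter would need an equivalent ACI.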
Installing on bare metal | Installing on bare metal OpenShift Container Platform 4.17 Installing OpenShift Container Platform on bare metal Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.17.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>",
"iscsiadm --mode node --logoutall=all",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>",
"iscsiadm --mode node --logout=all",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.17.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>",
"iscsiadm --mode node --logoutall=all",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>",
"iscsiadm --mode node --logout=all",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"interfaces: - name: enp2s0 1 type: ethernet 2 state: up 3 ipv4: enabled: false 4 ipv6: enabled: false - name: br-ex type: ovs-bridge state: up ipv4: enabled: false dhcp: false ipv6: enabled: false dhcp: false bridge: port: - name: enp2s0 5 - name: br-ex - name: br-ex type: ovs-interface state: up copy-mac-from: enp2s0 ipv4: enabled: true dhcp: true ipv6: enabled: false dhcp: false",
"cat <nmstate_configuration>.yaml | base64 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 10-br-ex-worker 2 spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<base64_encoded_nmstate_configuration> 3 mode: 0644 overwrite: true path: /etc/nmstate/openshift/cluster.yml",
"oc edit mc <machineconfig_custom_resource_name>",
"oc apply -f ./extraworker-secret.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost spec: preprovisioningNetworkDataName: ostest-extraworker-0-network-config-secret",
"oc project openshift-machine-api",
"oc get machinesets",
"oc scale machineset <machineset_name> --replicas=<n> 1",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.17/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4",
"coreos-installer iso reset rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer iso customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.iso rhcos-<version>-live.x86_64.iso",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img",
"[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto",
"[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond",
"[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond",
"coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/disk/by-path/<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.initiator=<initiator_iqn> \\ 5 --dest-karg-append netroot=<target_iqn> \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"coreos-installer pxe customize --pre-install mount-iscsi.sh \\ 1 --post-install unmount-iscsi.sh \\ 2 --dest-device /dev/mapper/mpatha \\ 3 --dest-ignition config.ign \\ 4 --dest-karg-append rd.iscsi.firmware=1 \\ 5 --dest-karg-append rd.multipath=default \\ 6 -o custom.img rhcos-<version>-live-initramfs.x86_64.img",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw",
"oc debug node/ip-10-0-141-105.ec2.internal",
"Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit",
"variant: openshift version: 4.17.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target",
"butane --pretty --strict multipath-config.bu > multipath-config.ign",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"coreos-installer install /dev/disk/by-path/ip-<IP_address>:<port>-iscsi-<target_iqn>-lun-<lun> \\ 1 --append-karg rd.iscsi.initiator=<initiator_iqn> \\ 2 --append.karg netroot=<target_iqn> \\ 3 --console ttyS0,115200n8 --ignition-file <path_to_file>",
"iscsiadm --mode node --logoutall=all",
"iscsiadm --mode discovery --type sendtargets --portal <IP_address> \\ 1 --login",
"mpathconf --enable && systemctl start multipathd.service",
"coreos-installer install /dev/mapper/mpatha \\ 1 --append-karg rd.iscsi.firmware=1 \\ 2 --append-karg rd.multipath=default \\ 3 --console ttyS0 --ignition-file <path_to_file>",
"iscsiadm --mode node --logout=all",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.30.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.17 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.17.0 True False False 19m baremetal 4.17.0 True False False 37m cloud-credential 4.17.0 True False False 40m cluster-autoscaler 4.17.0 True False False 37m config-operator 4.17.0 True False False 38m console 4.17.0 True False False 26m csi-snapshot-controller 4.17.0 True False False 37m dns 4.17.0 True False False 37m etcd 4.17.0 True False False 36m image-registry 4.17.0 True False False 31m ingress 4.17.0 True False False 30m insights 4.17.0 True False False 31m kube-apiserver 4.17.0 True False False 26m kube-controller-manager 4.17.0 True False False 36m kube-scheduler 4.17.0 True False False 36m kube-storage-version-migrator 4.17.0 True False False 37m machine-api 4.17.0 True False False 29m machine-approver 4.17.0 True False False 37m machine-config 4.17.0 True False False 36m marketplace 4.17.0 True False False 37m monitoring 4.17.0 True False False 29m network 4.17.0 True False False 38m node-tuning 4.17.0 True False False 37m openshift-apiserver 4.17.0 True False False 32m openshift-controller-manager 4.17.0 True False False 30m openshift-samples 4.17.0 True False False 32m operator-lifecycle-manager 4.17.0 True False False 37m operator-lifecycle-manager-catalog 4.17.0 True False False 37m operator-lifecycle-manager-packageserver 4.17.0 True False False 32m service-ca 4.17.0 True False False 38m storage 4.17.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"apiVersion: metal3.io/v1alpha1 kind: Provisioning metadata: name: provisioning-configuration spec: provisioningNetwork: \"Disabled\" watchAllNamespaces: false",
"oc create -f provisioning.yaml",
"provisioning.metal3.io/provisioning-configuration created",
"oc get pods -n openshift-machine-api",
"NAME READY STATUS RESTARTS AGE cluster-autoscaler-operator-678c476f4c-jjdn5 2/2 Running 0 5d21h cluster-baremetal-operator-6866f7b976-gmvgh 2/2 Running 0 5d21h control-plane-machine-set-operator-7d8566696c-bh4jz 1/1 Running 0 5d21h ironic-proxy-64bdw 1/1 Running 0 5d21h ironic-proxy-rbggf 1/1 Running 0 5d21h ironic-proxy-vj54c 1/1 Running 0 5d21h machine-api-controllers-544d6849d5-tgj9l 7/7 Running 1 (5d21h ago) 5d21h machine-api-operator-5c4ff4b86d-6fjmq 2/2 Running 0 5d21h metal3-6d98f84cc8-zn2mx 5/5 Running 0 5d21h metal3-image-customization-59d745768d-bhrp7 1/1 Running 0 5d21h",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret 1 namespace: openshift-machine-api type: Opaque stringData: nmstate: | 2 interfaces: 3 - name: <nic1_name> 4 type: ethernet state: up ipv4: address: - ip: <ip_address> 5 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 6 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 7 next-hop-interface: <next_hop_nic1_name> 8 --- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 9 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 10 bmc: address: <protocol>://<bmc_url> 11 credentialsName: openshift-worker-<num>-bmc-secret disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 12 preprovisioningNetworkDataName: openshift-worker-<num>-network-config-secret",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-network-config-secret namespace: openshift-machine-api # interfaces: - name: <nic_name> type: ethernet state: up ipv4: enabled: false ipv6: enabled: false",
"--- apiVersion: v1 kind: Secret metadata: name: openshift-worker-<num>-bmc-secret 1 namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> 2 password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-worker-<num> namespace: openshift-machine-api spec: online: true bootMACAddress: <nic1_mac_address> 3 bmc: address: <protocol>://<bmc_url> 4 credentialsName: openshift-worker-<num>-bmc disableCertificateVerification: false customDeploy: method: install_coreos userData: name: worker-user-data-managed namespace: openshift-machine-api rootDeviceHints: deviceName: <root_device_hint> 5",
"oc create -f bmh.yaml",
"secret/openshift-worker-<num>-network-config-secret created secret/openshift-worker-<num>-bmc-secret created baremetalhost.metal3.io/openshift-worker-<num> created",
"oc -n openshift-machine-api get bmh openshift-worker-<num>",
"NAME STATE CONSUMER ONLINE ERROR openshift-worker-<num> provisioned true",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION csr-gfm9f 33s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-o perator:node-bootstrapper <none> Pending",
"oc adm certificate approve <csr_name>",
"certificatesigningrequest.certificates.k8s.io/<csr_name> approved",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION app1 Ready worker 47s v1.24.0+dc5a2fd controller1 Ready master,worker 2d22h v1.24.0+dc5a2fd",
"--- apiVersion: v1 kind: Secret metadata: name: controller1-bmc namespace: openshift-machine-api type: Opaque data: username: <base64_of_uid> password: <base64_of_pwd> --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: controller1 namespace: openshift-machine-api spec: bmc: address: <protocol>://<bmc_url> 1 credentialsName: \"controller1-bmc\" bootMACAddress: <nic1_mac_address> customDeploy: method: install_coreos externallyProvisioned: true 2 online: true userData: name: controller-user-data-managed namespace: openshift-machine-api",
"oc create -f controller.yaml",
"secret/controller1-bmc created baremetalhost.metal3.io/controller1 created",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 13s",
"oc adm drain app1 --force --ignore-daemonsets=true",
"node/app1 cordoned WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-node-tuning-operator/tuned-tvthg, openshift-dns/dns- default-9q6rz, openshift-dns/node-resolver-zvt42, openshift-image-registry/node-ca-mzxth, openshift-ingress-cana ry/ingress-canary-qq5lf, openshift-machine-config-operator/machine-config-daemon-v79dm, openshift-monitoring/nod e-exporter-2vn59, openshift-multus/multus-additional-cni-plugins-wssvj, openshift-multus/multus-fn8tg, openshift -multus/network-metrics-daemon-5qv55, openshift-network-diagnostics/network-check-target-jqxn2, openshift-ovn-ku bernetes/ovnkube-node-rsvqg evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766965-258vp evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766950-kg5mk evicting pod openshift-operator-lifecycle-manager/collect-profiles-27766935-stf4s pod/collect-profiles-27766965-258vp evicted pod/collect-profiles-27766950-kg5mk evicted pod/collect-profiles-27766935-stf4s evicted node/app1 drained",
"oc edit bmh -n openshift-machine-api <host_name>",
"customDeploy: method: install_coreos",
"oc get bmh -A",
"NAMESPACE NAME STATE CONSUMER ONLINE ERROR AGE openshift-machine-api controller1 externally provisioned true 58m openshift-machine-api worker1 deprovisioning true 57m",
"oc delete bmh -n openshift-machine-api <bmh_name>",
"oc delete node <node_name>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION controller1 Ready master,worker 2d23h v1.24.0+dc5a2fd",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd01::/48 hostPrefix: 64",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16 - fd02::/112",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installing_on_bare_metal/index |
Using JBoss EAP XP 3.0.0 | Using JBoss EAP XP 3.0.0 Red Hat JBoss Enterprise Application Platform 7.4 For Use with JBoss EAP XP 3.0.0 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_xp_3.0.0/index |
Chapter 6. Config [operator.openshift.io/v1] | Chapter 6. Config [operator.openshift.io/v1] Description Config specifies the behavior of the config operator which is responsible for creating the initial configuration of other components on the cluster. The operator also handles installation, migration or synchronization of cloud configurations for AWS and Azure cloud based clusters Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Config Operator. status object status defines the observed status of the Config Operator. 6.1.1. .spec Description spec is the specification of the desired behavior of the Config Operator. Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 6.1.2. .status Description status defines the observed status of the Config Operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. 
latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 6.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 6.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string reason string status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 6.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 6.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Required group name namespace resource Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 6.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/configs DELETE : delete collection of Config GET : list objects of kind Config POST : create a Config /apis/operator.openshift.io/v1/configs/{name} DELETE : delete a Config GET : read the specified Config PATCH : partially update the specified Config PUT : replace the specified Config /apis/operator.openshift.io/v1/configs/{name}/status GET : read status of the specified Config PATCH : partially update status of the specified Config PUT : replace status of the specified Config 6.2.1. /apis/operator.openshift.io/v1/configs HTTP method DELETE Description delete collection of Config Table 6.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Config Table 6.2. HTTP responses HTTP code Reponse body 200 - OK ConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a Config Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.4. Body parameters Parameter Type Description body Config schema Table 6.5. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 202 - Accepted Config schema 401 - Unauthorized Empty 6.2.2. /apis/operator.openshift.io/v1/configs/{name} Table 6.6. Global path parameters Parameter Type Description name string name of the Config HTTP method DELETE Description delete a Config Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Config Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Config Table 6.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Config Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body Config schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty 6.2.3. /apis/operator.openshift.io/v1/configs/{name}/status Table 6.15. Global path parameters Parameter Type Description name string name of the Config HTTP method GET Description read status of the specified Config Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Config Table 6.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Config schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Config Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Config schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Config schema 201 - Created Config schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operator_apis/config-operator-openshift-io-v1 |
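As a brief illustration of the spec fields and endpoints documented above, the following sketch reads and patches the Config resource with the oc client. It assumes the usual cluster-scoped singleton instance name cluster and cluster-admin credentials; the logLevel value chosen here is only an example and is not part of the API reference itself.

# Read the current Config operator instance, including its status conditions.
oc get configs.operator.openshift.io cluster -o yaml

# Raise operator logging verbosity by patching spec.logLevel
# (valid values per the spec above: Normal, Debug, Trace, TraceAll).
oc patch configs.operator.openshift.io cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'

# Verify that the change was persisted.
oc get configs.operator.openshift.io cluster -o jsonpath='{.spec.logLevel}{"\n"}'

These commands exercise the GET and PATCH endpoints listed in the tables above; the same operations can be performed directly against /apis/operator.openshift.io/v1/configs/{name} with any HTTP client that presents valid cluster credentials.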
Chapter 3. Customizing Red Hat OpenStack Services on OpenShift Observability | Chapter 3. Customizing Red Hat OpenStack Services on OpenShift Observability Use observability with Red Hat OpenStack Services on OpenShift (RHOSO) to get insight into the metrics, logs, and alerts from your deployment. The observability architecture in RHOSO is composed of services within OpenShift, as well as services on your Compute nodes that expose metrics, logs, and alerts.You can use the OpenShift observability ecosystem for insight into the RHOSO environment. Additionally, you have access to the logging infrastructure for collecting, storing, and searching through logs. RHOSO services such as ceilometer and sg-core make metrics from your compute nodes and associated virtual infrastructure available to the OpenShift Observability framework. 3.1. Configuring Red Hat OpenStack Services on OpenShift Observability The Telemetry service (ceilometer, prometheus) is enabled by default in a Red Hat OpenStack Services on OpenShift (RHOSO) deployment. You can configure observability by editing the openstack_control_plane.yaml CR file. Prerequisites Optional: If you enable logging, the Cluster Logging Operator is installed from OperatorHub . A LokiStack instance must be running. For more information, see the Logging Quick start and Storing logs with LokiStack . A ClusterLogForwarder instance must be configured with a Syslog receiver. For more information, see Configuring log forwarding and the Syslog receiver configuration example . To configure a dashboard for the logs, see Visualization for logging . Note You do not need these Operators to expose and query OpenStack metrics in Prometheus format. If you do not disable ceilometer, then a Prometheus metrics exporter is created and exposed from inside the cluster at the following URL: http://ceilometer-internal.openstack.svc:3000/metrics Procedure Open the OpenStackControlPlane CR definition file, openstack_control_plane.yaml , on your workstation. Update the telemetry section based on the needs of your environment: 1 Use the scrapeInterval field to control the amount of time that passes before new metrics are gathered. Changing this parameter can affect performance. 2 Use the retention field to adjust the length of time telemetry metrics are stored. This field affects the amount of storage required. 3 Use the pvcStorageRequest field to change the amount of storage to be allocated for the Prometheus time series database. 4 Use the ipaddr field to set the IP address that rsyslog sends messages to. Ensure that the IP address is reachable from the Compute node. The default IP address is the vIP for internalapi , which is 172.17.0.80. Ensure that ipaddr and loadBalancerIPs have the same IP address, so that the client and server can communicate. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Enable the logging service by editing the OpenStackDataplaneNodeSet CR. In the services list, add logging after telemetry : Optional: Enable the telemetry-power-monitoring service by editing the OpenStackDataplaneNodeSet CR. 
In the services list, add telemetry-power-monitoring after telemetry : Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . For more information about deploying data plane services, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. Optional: Confirm that the control plane is deployed by reviewing the pods in the openstack namespace for each of your cells: The control plane is deployed when all the pods are either completed or running. Verification Access the remote shell for the OpenStackClient pod from your workstation: Confirm that you can query prometheus and that the scrape endpoints are active: Example output: Note Each entry in the value field should be "1" when there are active workloads scheduled on the cluster, except for the prometheus container. The prometheus container reports a value of "0" due to TLS, which is enabled by default. You can find the openstack-telemetry-operator dashboards by clicking Observe and then Dashboards in the RHOCP console. For more information about RHOCP dashboards, see Reviewing monitoring dashboards as a cluster administrator in the RHOCP Monitoring Guide. Optional: Verify that you deployed the telemetry-power-monitoring service. Check for ceilometer_agent_ipmi and kepler containers in the dataplane nodes: Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Additional resource Installing Logging | [
"telemetry: enabled: true template: metricStorage: enabled: true dashboardsEnabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s 1 storage: strategy: persistent retention: 24h 2 persistent: pvcStorageRequest: 20G 3 autoscaling: enabled: false aodh: databaseAccount: aodh databaseInstance: openstack secret: osp-secret heatInstance: heat ceilometer: enabled: true secret: osp-secret logging: enabled: false ipaddr: 172.17.0.80 4 annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started",
"- - telemetry - logging -",
"- - telemetry - telemetry-power-monitoring -",
"oc get pods -n openstack",
"oc rsh -n openstack openstackclient",
"openstack metric query up --disable-rbac -c container -c instance -c value",
"+-----------------+------------------------+-------+ | container | instance | value | +-----------------+------------------------+-------+ | alertmanager | 10.217.1.112:9093 | 1 | | prometheus | 10.217.1.63:9090 | 0 | | proxy-httpd | 10.217.1.52:3000 | 1 | | | 192.168.122.100:9100 | 1 | | | 192.168.122.101:9100 | 1 | +-----------------+------------------------+-------+",
"podman ps | grep -i -e ceilometer_agent_ipmi -e kepler"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/customizing_the_red_hat_openstack_services_on_openshift_deployment/rhoso-observability_custom_dataplane |
Chapter 7. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 7.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 7.2. Building a simple container You have an idea for an application and you want to containerize it. First you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile. Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 7.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers. 
Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile . In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 7.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 7.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
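For illustration, a minimal Dockerfile that builds on a UBI base image might look like the following sketch. The application content, installed package, and image names are hypothetical placeholders rather than values from this procedure:
FROM registry.access.redhat.com/ubi9/ubi
# Install the packages the application needs (httpd is only an example)
RUN dnf install -y httpd && dnf clean all
# Copy the application content into the image
COPY index.html /var/www/html/index.html
# Document the port the application listens on
EXPOSE 80
# Run the web server in the foreground so the container keeps running
CMD ["httpd", "-D", "FOREGROUND"]
With such a Dockerfile in the current directory, you might then build, tag, and push the image with commands similar to podman build -t myapp . , podman tag localhost/myapp quay.io/myrepo/myapp:latest , and podman push quay.io/myrepo/myapp:latest , logging in first with podman login quay.io if the registry requires credentials.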
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI. In the Developer perspective, navigate to the +Add view and in the Developer Catalog tile, view all of the available services in the Developer Catalog. Figure 7.2. Choose S2I base images for apps that need specific runtimes 7.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 7.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 7.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 7.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so they can handle things like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 7.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.18 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 7.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 7.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML.
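For illustration, such a manifest might resemble the following minimal sketch; the application name, image reference, and port are hypothetical placeholders rather than values from this document:
# Deployment that keeps two replicas of the hypothetical myapp image running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: quay.io/myrepo/myapp:latest
        ports:
        - containerPort: 8080
---
# Service that groups the myapp pods behind one stable name and port
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
The Service here simply fronts the pods created by the Deployment on the same port, which is the kind of grouping described in the pods and services section above.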
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 7.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/understanding-development |
15.3.2. Connecting to a VNC Server | 15.3.2. Connecting to a VNC Server Once the VNC server is configured, you can connect to it from any VNC viewer. Procedure 15.6. Connecting to a VNC Server Using a GUI Enter the vncviewer command with no arguments; the VNC Viewer: Connection Details utility appears. It prompts for a VNC server to connect to. If required, to prevent disconnecting any existing VNC connections to the same display, select the option to allow sharing of the desktop as follows: Select the Options button. Select the Misc. tab. Select the Shared button. Press OK to return to the main menu. Enter an address and display number to connect to: address : display_number Press Connect to connect to the VNC server display. You will be prompted to enter the VNC password. This will be the VNC password for the user corresponding to the display number unless a global default VNC password was set. A window appears showing the VNC server desktop. Note that this is not the desktop the normal user sees; it is an Xvnc desktop. Procedure 15.7. Connecting to a VNC Server Using the CLI Enter the vncviewer command with the address and display number as arguments: vncviewer address : display_number Where address is an IP address or host name. Authenticate yourself by entering the VNC password. This will be the VNC password for the user corresponding to the display number unless a global default VNC password was set. A window appears showing the VNC server desktop. Note that this is not the desktop the normal user sees; it is the Xvnc desktop. 15.3.2.1. Configuring the Firewall for VNC When using a non-encrypted connection, the firewall might block your connection. The VNC protocol is remote framebuffer ( RFB ), which is transported in TCP packets. If required, open a port for the TCP protocol as described below. When using the -via option, traffic is redirected over SSH which is enabled by default. Note The default port of the VNC server is 5900. To reach the port through which a remote desktop will be accessible, sum the default port and the user's assigned display number. For example, for the second display: 2 + 5900 = 5902. Procedure 15.8. Opening a Port Using lokkit The lokkit command provides a way to quickly enable a port using the command line. To enable a specific port, for example port 5902 for TCP , issue the following command as root : Note that this will restart the firewall as long as it has not been disabled with the --disabled option. Active connections will be terminated and time out on the initiating machine. Verify whether the chosen port is open. As root , enter: If you are unsure of the port numbers in use for VNC, as root , enter: Ports starting 59XX are for the VNC RFB protocol. Ports starting 60XX are for the X windows protocol. To list the ports and the Xvnc session's associated user, as root , enter: Procedure 15.9. Configuring the Firewall Using an Editor When preparing a configuration file for multiple installations using administration tools, it is useful to edit the firewall configuration file directly. Note that any mistakes in the configuration file could have unexpected consequences, cause an error, and prevent the firewall settings from being applied. Therefore, check the /etc/sysconfig/system-config-firewall file thoroughly after editing.
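Because a mistake in this file can leave the system without the expected firewall rules, you might first keep a copy of the current file before editing, for example: cp /etc/sysconfig/system-config-firewall /etc/sysconfig/system-config-firewall.bak , where the .bak suffix is only a suggested naming convention.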
To check what the firewall is configured to allow, issue the following command as root to view the firewall configuration file: In this example taken from a default installation, the firewall is enabled but VNC ports have not been configured to pass through. Open /etc/sysconfig/system-config-firewall for editing as root and add lines in the following format to the firewall configuration file: --port= port_number :tcp For example, to add port 5902 : Note that these changes will not take effect even if the firewall is reloaded or the system rebooted. To apply the settings in /etc/sysconfig/system-config-firewall , issue the following command as root : | [
"~]# lokkit --port=5902:tcp --update",
"~]# iptables -L -n | grep 'tcp.*59' ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5902",
"~]# netstat -tnlp tcp 0 0 0.0.0.0:6003 0.0.0.0:* LISTEN 4290/Xvnc tcp 0 0 0.0.0.0:5900 0.0.0.0:* LISTEN 7013/x0vncserver tcp 0 0 0.0.0.0:5902 0.0.0.0:* LISTEN 4189/Xvnc tcp 0 0 0.0.0.0:5903 0.0.0.0:* LISTEN 4290/Xvnc tcp 0 0 0.0.0.0:6002 0.0.0.0:* LISTEN 4189/Xvnc",
"~]# lsof -i -P | grep vnc Xvnc 4189 jane 0u IPv6 27972 0t0 TCP *:6002 (LISTEN) Xvnc 4189 jane 1u IPv4 27973 0t0 TCP *:6002 (LISTEN) Xvnc 4189 jane 6u IPv4 27979 0t0 TCP *:5902 (LISTEN) Xvnc 4290 joe 0u IPv6 28231 0t0 TCP *:6003 (LISTEN) Xvnc 4290 joe 1u IPv4 28232 0t0 TCP *:6003 (LISTEN) Xvnc 4290 joe 6u IPv4 28244 0t0 TCP *:5903 (LISTEN) x0vncserv 7013 joe 4u IPv4 47578 0t0 TCP *:5900 (LISTEN)",
"~]# less /etc/sysconfig/system-config-firewall Configuration file for system-config-firewall --enabled --service=ssh",
"~]# vi /etc/sysconfig/system-config-firewall # Configuration file for system-config-firewall --enabled --service=ssh --port=5902:tcp",
"~]# lokkit --update"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-connecting-to-vnc-server |
7.4. Configuration Examples | 7.4. Configuration Examples 7.4.1. Setting up CVS This example describes a simple CVS setup and an SELinux configuration which allows remote access. Two hosts are used in this example; a CVS server with a host name of cvs-srv with an IP address of 192.168.1.1 and a client with a host name of cvs-client and an IP address of 192.168.1.100 . Both hosts are on the same subnet (192.168.1.0/24). This is an example only and assumes that the cvs and xinetd packages are installed, that the SELinux targeted policy is used, and that SELinux is running in enforced mode. This example will show that even with full DAC permissions, SELinux can still enforce policy rules based on file labels and only allow access to certain areas that have been specifically labeled for access by CVS. Note Steps 1-9 should be performed on the CVS server, cvs-srv . This example requires the cvs and xinetd packages. Run the rpm -q cvs command to see if the cvs package is installed. If it is not installed, run the following command as the root user to install cvs : Run the rpm -q xinetd command to see if the xinetd package is installed. If it is not installed, run the following command as the root user to install xinetd : Create a group named CVS . This can be done via the groupadd CVS command as the root user, or by using the system-config-users tool. Create a user with a user name of cvsuser and make this user a member of the CVS group. This can be done using the system-config-users tool. Edit the /etc/services file and make sure that the CVS server has uncommented entries looking similar to the following: Create the CVS repository in the root area of the file system. When using SELinux, it is best to have the repository in the root file system so that recursive labels can be given to it without affecting any other subdirectories. For example, as the root user, create a /cvs/ directory to house the repository: Give full permissions to the /cvs/ directory to all users: Warning This is an example only and these permissions should not be used in a production system. Edit the /etc/xinetd.d/cvs file and make sure that the CVS section is uncommented and configured to use the /cvs/ directory. The file should look similar to: Start the xinetd daemon by running the service xinetd start command as the root user. Add a rule which allows inbound connections using TCP on port 2401 by using the system-config-firewall tool. As the cvsuser user, run the following command: At this point, CVS has been configured but SELinux will still deny logins and file access. To demonstrate this, set the USDCVSROOT variable on cvs-client and try to log in remotely. The following step should be performed on cvs-client : SELinux has blocked access. In order to get SELinux to allow this access, the following step should be performed on cvs-srv : Change the context of the /cvs/ directory as the root user in order to recursively label any existing and new data in the /cvs/ directory, giving it the cvs_data_t type: The client, cvs-client should now be able to log in and access all CVS resources in this repository: | [
"~]# yum install cvs",
"~]# yum install xinetd",
"cvspserver 2401/tcp # CVS client/server operations cvspserver 2401/udp # CVS client/server operations",
"mkdir /cvs",
"chmod -R 777 /cvs",
"service cvspserver { disable = no port = 2401 socket_type = stream protocol = tcp wait = no user = root passenv = PATH server = /usr/bin/cvs env = HOME=/cvs server_args = -f --allow-root=/cvs pserver # bind = 127.0.0.1",
"[cvsuser@cvs-client]USD cvs -d /cvs init",
"[cvsuser@cvs-client]USD export CVSROOT=:pserver:[email protected]:/cvs [cvsuser@cvs-client]USD [cvsuser@cvs-client]USD cvs login Logging in to :pserver:[email protected]:2401/cvs CVS password: ******** cvs [login aborted]: unrecognized auth response from 192.168.100.1: cvs pserver: cannot open /cvs/CVSROOT/config: Permission denied",
"semanage fcontext -a -t cvs_data_t '/cvs(/.*)?' restorecon -R -v /cvs",
"[cvsuser@cvs-client]USD export CVSROOT=:pserver:[email protected]:/cvs [cvsuser@cvs-client]USD [cvsuser@cvs-client]USD cvs login Logging in to :pserver:[email protected]:2401/cvs CVS password: ******** [cvsuser@cvs-client]USD"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-concurrent_versioning_system-configuration_examples |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_tuning/proc-providing-feedback-on-redhat-documentation |
Chapter 5. Renewing and changing the SSL certificate | Chapter 5. Renewing and changing the SSL certificate If your current SSL certificate has expired or will expire soon, you can either renew or replace the SSL certificate used by Ansible Automation Platform. You must renew the SSL certificate if you need to regenerate the SSL certificate with new information such as new hosts. You must replace the SSL certificate if you want to use an SSL certificate signed by an internal certificate authority. 5.1. Renewing the self-signed SSL certificate The following steps regenerate a new SSL certificate for both automation controller and automation hub. Procedure Add aap_service_regen_cert=true to the inventory file in the [all:vars] section: [all:vars] aap_service_regen_cert=true Run the installer. Verification Validate the CA file and server.crt file on automation controller: openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/tower/tower.cert openssl s_client -connect <AUTOMATION_HUB_URL>:443 Validate the CA file and server.crt file on automation hub: openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/pulp/certs/pulp_webserver.crt openssl s_client -connect <AUTOMATION_CONTROLLER_URL>:443 5.2. Changing SSL certificates To change the SSL certificate, you can edit the inventory file and run the installation program. The installation program verifies that all Ansible Automation Platform components are working. The installation program can take a long time to run. Alternatively, you can change the SSL certificates manually. This is quicker, but there is no automatic verification. Red Hat recommends that you use the installation program to make changes to your Ansible Automation Platform instance. 5.2.1. Prerequisites If there is an intermediate certificate authority, you must append it to the server certificate. Both automation controller and automation hub use NGINX so the server certificate must be in PEM format. Use the correct order for the certificates: The server certificate comes first, followed by the intermediate certificate authority. For further information, see the ssl certificate section of the NGINX documentation . 5.2.2. Changing the SSL certificate and key using the installer The following procedure describes how to change the SSL certificate and key in the inventory file. Procedure Copy the new SSL certificates and keys to a path relative to the Ansible Automation Platform installer. Add the absolute paths of the SSL certificates and keys to the inventory file. Refer to the Automation controller variables , Automation hub variables , and Event-Driven Ansible controller variables sections of RPM installation for guidance on setting these variables. Automation controller: web_server_ssl_cert , web_server_ssl_key , custom_ca_cert Automation hub: automationhub_ssl_cert , automationhub_ssl_key , custom_ca_cert Event-Driven Ansible controller: automationedacontroller_ssl_cert , automationedacontroller_ssl_key , custom_ca_cert Note The custom_ca_cert must be the root certificate authority that signed the intermediate certificate authority. This file is installed in /etc/pki/ca-trust/source/anchors . Run the installation program. 5.2.2.1. Changing the SSL certificate and key manually on automation controller The following procedure describes how to change the SSL certificate and key manually on automation controller. 
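If an intermediate certificate authority signed your certificate, you can prepare the combined PEM file described in the prerequisites before you begin. A minimal sketch, assuming hypothetical file names: cat server.crt intermediate-ca.crt > combined-server.crt . The server certificate comes first, followed by the intermediate certificate authority, and the combined file is what you then install in place of the plain server certificate.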
Procedure Backup the current SSL certificate: cp /etc/tower/tower.cert /etc/tower/tower.cert-USD(date +%F) Backup the current key files: cp /etc/tower/tower.key /etc/tower/tower.key-USD(date +%F)+ Copy the new SSL certificate to /etc/tower/tower.cert . Copy the new key to /etc/tower/tower.key . Restore the SELinux context: restorecon -v /etc/tower/tower.cert /etc/tower/tower.key Set appropriate permissions for the certificate and key files: chown root:awx /etc/tower/tower.cert /etc/tower/tower.key chmod 0600 /etc/tower/tower.cert /etc/tower/tower.key Test the NGINX configuration: nginx -t Reload NGINX: systemctl reload nginx.service Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 5.2.2.2. Changing the SSL certificate and key on automation controller on OpenShift Container Platform The following procedure describes how to change the SSL certificate and key for automation controller running on OpenShift Container Platform. Procedure Copy the signed SSL certificate and key to a secure location. Create a TLS secret within OpenShift: oc create secret tls USD{CONTROLLER_INSTANCE}-certs-USD(date +%F) --cert=/path/to/ssl.crt --key=/path/to/ssl.key Modify the automation controller custom resource to add route_tls_secret and the name of the new secret to the spec section. oc edit automationcontroller/USD{CONTROLLER_INSTANCE} ... spec: route_tls_secret: automation-controller-certs-2023-04-06 ... The name of the TLS secret is arbitrary. In this example, it is timestamped with the date that the secret is created, to differentiate it from other TLS secrets applied to the automation controller instance. Wait a few minutes for the changes to be applied. Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 5.2.2.3. Changing the SSL certificate and key for automation hub on OpenShift Container Platform The following procedure describes how to change the SSL certificate and key for automation hub running on OpenShift Container Platform. Procedure Copy the signed SSL certificate and key to a secure location. Create a TLS secret within OpenShift: oc create secret tls USD{AUTOMATION_HUB_INSTANCE}-certs-USD(date +%F) --cert=/path/to/ssl.crt --key=/path/to/ssl.key Modify the automation hub custom resource to add route_tls_secret and the name of the new secret to the spec section. oc edit automationhub/USD{AUTOMATION_HUB_INSTANCE} ... spec: route_tls_secret: automation-hub-certs-2023-04-06 ... The name of the TLS secret is arbitrary. In this example, it is timestamped with the date that the secret is created, to differentiate it from other TLS secrets applied to the automation hub instance. Wait a few minutes for the changes to be applied. Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 5.2.2.4. Changing the SSL certificate and key on Event-Driven Ansible controller The following procedure describes how to change the SSL certificate and key manually on Event-Driven Ansible controller. Procedure Backup the current SSL certificate: cp /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.cert-USD(date +%F) Backup the current key files: cp /etc/ansible-automation-platform/eda/server.key /etc/ansible-automation-platform/eda/server.key-USD(date +%F) Copy the new SSL certificate to /etc/ansible-automation-platform/eda/server.cert . 
Copy the new key to /etc/ansible-automation-platform/eda/server.key . Restore the SELinux context: restorecon -v /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key Set appropriate permissions for the certificate and key files: chown root:eda /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key chmod 0600 /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key Test the NGINX configuration: nginx -t Reload NGINX: systemctl reload nginx.service Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 5.2.2.5. Changing the SSL certificate and key manually on automation hub The following procedure describes how to change the SSL certificate and key manually on automation hub. Procedure Backup the current SSL certificate: cp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.crt-USD(date +%F) Backup the current key files: cp /etc/pulp/certs/pulp_webserver.key /etc/pulp/certs/pulp_webserver.key-USD(date +%F) Copy the new SSL certificate to /etc/pulp/certs/pulp_webserver.crt . Copy the new key to /etc/pulp/certs/pulp_webserver.key . Restore the SELinux context: restorecon -v /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key Set appropriate permissions for the certificate and key files: chown root:pulp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key chmod 0600 /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key Test the NGINX configuration: nginx -t Reload NGINX: systemctl reload nginx.service Verify that new SSL certificate and key have been installed: true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443 | [
"[all:vars] aap_service_regen_cert=true",
"openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/tower/tower.cert openssl s_client -connect <AUTOMATION_HUB_URL>:443",
"openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/pulp/certs/pulp_webserver.crt openssl s_client -connect <AUTOMATION_CONTROLLER_URL>:443",
"cp /etc/tower/tower.cert /etc/tower/tower.cert-USD(date +%F)",
"cp /etc/tower/tower.key /etc/tower/tower.key-USD(date +%F)+",
"restorecon -v /etc/tower/tower.cert /etc/tower/tower.key",
"chown root:awx /etc/tower/tower.cert /etc/tower/tower.key chmod 0600 /etc/tower/tower.cert /etc/tower/tower.key",
"nginx -t",
"systemctl reload nginx.service",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"create secret tls USD{CONTROLLER_INSTANCE}-certs-USD(date +%F) --cert=/path/to/ssl.crt --key=/path/to/ssl.key",
"edit automationcontroller/USD{CONTROLLER_INSTANCE}",
"spec: route_tls_secret: automation-controller-certs-2023-04-06",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"create secret tls USD{AUTOMATION_HUB_INSTANCE}-certs-USD(date +%F) --cert=/path/to/ssl.crt --key=/path/to/ssl.key",
"edit automationhub/USD{AUTOMATION_HUB_INSTANCE}",
"spec: route_tls_secret: automation-hub-certs-2023-04-06",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"cp /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.cert-USD(date +%F)",
"cp /etc/ansible-automation-platform/eda/server.key /etc/ansible-automation-platform/eda/server.key-USD(date +%F)",
"restorecon -v /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key",
"chown root:eda /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key",
"chmod 0600 /etc/ansible-automation-platform/eda/server.cert /etc/ansible-automation-platform/eda/server.key",
"nginx -t",
"systemctl reload nginx.service",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443",
"cp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.crt-USD(date +%F)",
"cp /etc/pulp/certs/pulp_webserver.key /etc/pulp/certs/pulp_webserver.key-USD(date +%F)",
"restorecon -v /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key",
"chown root:pulp /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key",
"chmod 0600 /etc/pulp/certs/pulp_webserver.crt /etc/pulp/certs/pulp_webserver.key",
"nginx -t",
"systemctl reload nginx.service",
"true | openssl s_client -showcerts -connect USD{CONTROLLER_FQDN}:443"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/operating_ansible_automation_platform/changing-ssl-certs-keys |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open a the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/service_telemetry_framework_1.5/proc_providing-feedback-on-red-hat-documentation |
Chapter 3. Automation mesh design patterns | Chapter 3. Automation mesh design patterns The automation mesh topologies in this section provide examples you can use to design a mesh deployment in your environment. Examples range from a simple, hybrid node deployment to a complex pattern that deploys numerous automation controller instances, employing several execution and hop nodes. Prerequisites You reviewed conceptual information on node types and relationships Note The following examples include images that illustrate the mesh topology. The arrows in the images indicate the direction of peering. After peering is established, the connection between the nodes allows bidirectional communication. 3.1. Multiple hybrid nodes inventory file example This example inventory file deploys a control plane consisting of multiple hybrid nodes. The nodes in the control plane are automatically peered to one another. [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com The following image displays the topology of this mesh network. The default node_type for nodes in the control plane is hybrid . You can explicitly set the node_type of individual nodes to hybrid in the [automationcontroller] group : [automationcontroller] aap_c_1.example.com node_type=hybrid aap_c_2.example.com node_type=hybrid aap_c_3.example.com node_type=hybrid Alternatively, you can set the node_type of all nodes in the [automationcontroller] group. When you add new nodes to the control plane, they are automatically set to hybrid nodes. [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=hybrid If you think that you might add control nodes to your control plane in future, it is better to define a separate group for the hybrid nodes, and set the node_type for the group: [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group:vars] node_type=hybrid 3.2. Single node control plane with single execution node This example inventory file deploys a single-node control plane and establishes a peer relationship to an execution node. [automationcontroller] aap_c_1.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com The following image displays the topology of this mesh network. The [automationcontroller] stanza defines the control nodes. If you add a new node to the automationcontroller group, it will automatically peer with the aap_c_1.example.com node. The [automationcontroller:vars] stanza sets the node type to control for all nodes in the control plane and defines how the nodes peer to the execution nodes: If you add a new node to the execution_nodes group, the control plane nodes automatically peer to it. If you add a new node to the automationcontroller group, the node type is set to control . The [execution_nodes] stanza lists all the execution and hop nodes in the inventory. The default node type is execution . You can specify the node type for an individual node: [execution_nodes] aap_e_1.example.com node_type=execution Alternatively, you can set the node_type of all execution nodes in the [execution_nodes] group. When you add new nodes to the group, they are automatically set to execution nodes.
[execution_nodes] aap_e_1.example.com [execution_nodes:vars] node_type=execution If you plan to add hop nodes to your inventory in future, it is better to define a separate group for the execution nodes, and set the node_type for the group: [execution_nodes] aap_e_1.example.com [local_execution_group] aap_e_1.example.com [local_execution_group:vars] node_type=execution 3.3. Minimum resilient configuration This example inventory file deploys a control plane consisting of two control nodes, and two execution nodes. All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all nodes in the execution_nodes group. This configuration is resilient because the execution nodes are reachable from all control nodes. The capacity algorithm determines which control node is chosen when a job is launched. Refer to Automation controller Capacity Determination and Job Impact in the Automation Controller User Guide for more information. The following inventory file defines this configuration. [automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com aap_e_2.example.com The [automationcontroller] stanza defines the control nodes. All nodes in the control plane are peered to one another. If you add a new node to the automationcontroller group, it will automatically peer with the original nodes. The [automationcontroller:vars] stanza sets the node type to control for all nodes in the control plane and defines how the nodes peer to the execution nodes: If you add a new node to the execution_nodes group, the control plane nodes automatically peer to it. If you add a new node to the automationcontroller group, the node type is set to control . The following image displays the topology of this mesh network. 3.4. Segregated local and remote execution configuration This configuration adds a hop node and a remote execution node to the resilient configuration. The remote execution node is reachable from the hop node. You can use this setup if you are setting up execution nodes in a remote location, or if you need to run automation in a DMZ network. [automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_h_1.example.com node_type=hop aap_e_3.example.com [instance_group_local] aap_e_1.example.com aap_e_2.example.com [hop] aap_h_1.example.com [hop:vars] peers=automationcontroller [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=hop The following image displays the topology of this mesh network. The [automationcontroller:vars] stanza sets the node types for all nodes in the control plane and defines how the control nodes peer to the local execution nodes: All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all local execution nodes. If the name of a group of nodes begins with instance_group_ , the installer recognises it as an instance group and adds it to the Ansible Automation Platform user interface. 3.5. Multi-hopped execution node In this configuration, resilient controller nodes are peered with resilient local execution nodes. Resilient local hop nodes are peered with the controller nodes. A remote execution node and a remote hop node are peered with the local hop nodes. 
You can use this setup if you need to run automation in a DMZ network from a remote network. [automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_e_3.example.com aap_e_4.example.com aap_h_1.example.com node_type=hop aap_h_2.example.com node_type=hop aap_h_3.example.com node_type=hop [instance_group_local] aap_e_1.example.com aap_e_2.example.com [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=local_hop [instance_group_multi_hop_remote] aap_e_4.example.com [instance_group_multi_hop_remote:vars] peers=remote_multi_hop [local_hop] aap_h_1.example.com aap_h_2.example.com [local_hop:vars] peers=automationcontroller [remote_multi_hop] aap_h_3 peers=local_hop The following image displays the topology of this mesh network. The [automationcontroller:vars] stanza sets the node types for all nodes in the control plane and defines how the control nodes peer to the local execution nodes: All nodes in the control plane are automatically peered to one another. All nodes in the control plane are peered with all local execution nodes. The [local_hop:vars] stanza peers all nodes in the [local_hop] group with all the control nodes. If the name of a group of nodes begins with instance_group_ , the installer recognises it as an instance group and adds it to the Ansible Automation Platform user interface. 3.6. Outbound only connections to controller nodes This example inventory file deploys a control plane consisting of two control nodes, and several execution nodes. Only outbound connections are allowed to the controller nodes All nodes in the 'execution_nodes' group are peered with all nodes in the controller plane. [automationcontroller] controller-[1:2].example.com [execution_nodes] execution-[1:5].example.com [execution_nodes:vars] # connection is established *from* the execution nodes *to* the automationcontroller peers=automationcontroller The following image displays the topology of this mesh network. | [
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com",
"[automationcontroller] aap_c_1.example.com node_type=hybrid aap_c_2.example.com node_type=hybrid aap_c_3.example.com node_type=hybrid",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=hybrid",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [hybrid_group:vars] node_type=hybrid",
"[automationcontroller] aap_c_1.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com",
"[execution_nodes] aap_e_1.example.com node_type=execution",
"[execution_nodes] aap_e_1.example.com [execution_nodes:vars] node_type=execution",
"[execution_nodes] aap_e_1.example.com [local_execution_group] aap_e_1.example.com [local_execution_group:vars] node_type=execution",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=execution_nodes [execution_nodes] aap_e_1.example.com aap_e_2.example.com",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_h_1.example.com node_type=hop aap_e_3.example.com [instance_group_local] aap_e_1.example.com aap_e_2.example.com [hop] aap_h_1.example.com [hop:vars] peers=automationcontroller [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=hop",
"[automationcontroller] aap_c_1.example.com aap_c_2.example.com aap_c_3.example.com [automationcontroller:vars] node_type=control peers=instance_group_local [execution_nodes] aap_e_1.example.com aap_e_2.example.com aap_e_3.example.com aap_e_4.example.com aap_h_1.example.com node_type=hop aap_h_2.example.com node_type=hop aap_h_3.example.com node_type=hop [instance_group_local] aap_e_1.example.com aap_e_2.example.com [instance_group_remote] aap_e_3.example.com [instance_group_remote:vars] peers=local_hop [instance_group_multi_hop_remote] aap_e_4.example.com [instance_group_multi_hop_remote:vars] peers=remote_multi_hop [local_hop] aap_h_1.example.com aap_h_2.example.com [local_hop:vars] peers=automationcontroller [remote_multi_hop] aap_h_3 peers=local_hop",
"[automationcontroller] controller-[1:2].example.com [execution_nodes] execution-[1:5].example.com connection is established *from* the execution nodes *to* the automationcontroller peers=automationcontroller"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_vm_environments/design-patterns |
Chapter 3. ClusterRole [authorization.openshift.io/v1] | Chapter 3. ClusterRole [authorization.openshift.io/v1] Description ClusterRole is a logical grouping of PolicyRules that can be referenced as a unit by ClusterRoleBindings. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required rules 3.1. Specification Property Type Description aggregationRule AggregationRule_v2 AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller. apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta_v2 metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata rules array Rules holds all the PolicyRules for this ClusterRole rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 3.1.1. .rules Description Rules holds all the PolicyRules for this ClusterRole Type array 3.1.2. .rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 3.2. 
API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/clusterroles GET : list objects of kind ClusterRole POST : create a ClusterRole /apis/authorization.openshift.io/v1/clusterroles/{name} DELETE : delete a ClusterRole GET : read the specified ClusterRole PATCH : partially update the specified ClusterRole PUT : replace the specified ClusterRole 3.2.1. /apis/authorization.openshift.io/v1/clusterroles HTTP method GET Description list objects of kind ClusterRole Table 3.1. HTTP responses HTTP code Reponse body 200 - OK ClusterRoleList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterRole Table 3.2. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.3. Body parameters Parameter Type Description body ClusterRole schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 202 - Accepted ClusterRole schema 401 - Unauthorized Empty 3.2.2. /apis/authorization.openshift.io/v1/clusterroles/{name} Table 3.5. Global path parameters Parameter Type Description name string name of the ClusterRole HTTP method DELETE Description delete a ClusterRole Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.7. HTTP responses HTTP code Reponse body 200 - OK Status_v3 schema 202 - Accepted Status_v3 schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterRole Table 3.8. HTTP responses HTTP code Reponse body 200 - OK ClusterRole schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterRole Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. HTTP responses HTTP code Response body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterRole Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.12. Body parameters Parameter Type Description body ClusterRole schema Table 3.13. HTTP responses HTTP code Response body 200 - OK ClusterRole schema 201 - Created ClusterRole schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/role_apis/clusterrole-authorization-openshift-io-v1
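For illustration, a minimal ClusterRole manifest for this API group can be applied with the oc client. This is only a sketch: the role name and the specific API group, resources, and verbs below are example values chosen for this illustration, not values mandated by the schema above.

oc apply -f - <<EOF
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  name: example-pod-reader   # example name, choose your own
rules:                       # required field
- apiGroups:                 # "" targets the core API group
  - ""
  resources:                 # required in each rule
  - pods
  verbs:                     # required in each rule
  - get
  - list
  - watch
EOF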
Chapter 2. Accessing the Fuse Console | Chapter 2. Accessing the Fuse Console To access the Fuse Console for Apache Karaf standalone, follow these steps. Prerequisite Install Fuse on the Karaf container. For step-by-step instructions, see Installing on Apache Karaf . Procedure In the command line, navigate to the directory in which you installed Red Hat Fuse and run the following command to start Fuse standalone: The Karaf console starts and shows version information, the default Fuse Console URL, and a list of common commands. In a browser, type the URL to connect to the Fuse Console. For example: http://localhost:8181/hawtio In the login page, type your user name and password and then click Log In . By default, the Fuse Console shows the Home page. The left navigation tabs indicate the running plugins. | [
"./bin/fuse"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/fuse-console-access-karaf |
Chapter 21. Device Tapset | Chapter 21. Device Tapset This set of functions is used to handle kernel and userspace device numbers. It contains the following functions: | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/dev-dot-stp |
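As a brief illustration (not part of the tapset's function list itself), the dev tapset's MKDEV, MAJOR, and MINOR functions can be exercised from a one-line script; the major/minor pair 8:0 is only an example device number.

stap -e 'probe begin { d = MKDEV(8, 0); printf("major=%d minor=%d\n", MAJOR(d), MINOR(d)); exit() }'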
2.5.3. VFS Tuning Options: Research and Experiment | 2.5.3. VFS Tuning Options: Research and Experiment Like all Linux file systems, GFS2 sits on top of a layer called the virtual file system (VFS). You can tune the VFS layer to improve underlying GFS2 performance by using the sysctl (8) command. For example, the values for dirty_background_ratio and vfs_cache_pressure may be adjusted depending on your situation. To fetch the current values, use the following commands: The following commands adjust the values: You can permanently change the values of these parameters by editing the /etc/sysctl.conf file. To find the optimal values for your use cases, research the various VFS options and experiment on a test cluster before deploying into full production. | [
"sysctl -n vm.dirty_background_ratio sysctl -n vm.vfs_cache_pressure",
"sysctl -w vm.dirty_background_ratio=20 sysctl -w vm.vfs_cache_pressure=500"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s2-vfstuning-gfs2 |
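For example, to make the example values above persist across reboots, you could add lines such as the following to /etc/sysctl.conf. The numbers shown are the illustrative values from this section, not tuning recommendations:

vm.dirty_background_ratio = 20
vm.vfs_cache_pressure = 500

The settings take effect at boot, or immediately when loaded with sysctl -p.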
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/proc_providing-feedback-on-red-hat-documentation_monitoring-and-managing-system-status-and-performance |
4.3.7. Activating and Deactivating Volume Groups | 4.3.7. Activating and Deactivating Volume Groups When you create a volume group it is, by default, activated. This means that the logical volumes in that group are accessible and subject to change. There are various circumstances in which you need to make a volume group inactive and thus unknown to the kernel. To deactivate or activate a volume group, use the -a ( --available ) argument of the vgchange command. The following example deactivates the volume group my_volume_group . If clustered locking is enabled, add 'e' to activate or deactivate a volume group exclusively on one node or 'l' to activate or deactivate a volume group only on the local node. Logical volumes with single-host snapshots are always activated exclusively because they can only be used on one node at once. You can deactivate individual logical volumes with the lvchange command, as described in Section 4.4.4, "Changing the Parameters of a Logical Volume Group" . For information on activating logical volumes on individual nodes in a cluster, see Section 4.8, "Activating Logical Volumes on Individual Nodes in a Cluster" . | [
"vgchange -a n my_volume_group"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/VG_activate |
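For example, assuming clustered locking is enabled as described above, the exclusive and local variants of the same command might look like this; my_volume_group is the example volume group name used in this section:

vgchange -a ey my_volume_group   # activate exclusively on one node
vgchange -a ln my_volume_group   # deactivate only on the local node
vgchange -a y my_volume_group    # activate the volume group again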
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deduplicating_and_compressing_logical_volumes_on_rhel/proc_providing-feedback-on-red-hat-documentation_deduplicating-and-compressing-logical-volumes-on-rhel
Chapter 4. Develop MicroProfile applications for JBoss EAP | Chapter 4. Develop MicroProfile applications for JBoss EAP To get started with developing applications that use MicroProfile APIs, create a Maven project and define the required dependencies. Use the JBoss EAP MicroProfile Bill of Materials (BOM) to control the versions of runtime Maven dependencies in the application Project Object Model (POM). After you create a Maven project, refer to the JBoss EAP XP Quickstarts for information about developing applications for specific MicroProfile APIs. For more information, see JBoss EAP XP Quickstarts . 4.1. Creating a Maven project with maven-archetype-webapp Use the maven-archetype-webapp archetype to create a Maven project for building applications for JBoss EAP deployment. Maven provides different archetypes for creating projects based on templates specific to project types. The maven-archetype-webapp creates a project with the structure required to develop simple web-applications. Prerequisites You have installed Maven. For more information, see Downloading Apache Maven . Procedure Set up a Maven project by using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. 1 groupID uniquely identifies the project. 2 artifactID is the name for the generated jar archive. 3 archetypeGroupID is the unique ID for maven-archetype-webapp . 4 archetypeArtifactId is the artifact ID for maven-archetype-webapp . 5 InteractiveMode instructs Maven to use the supplied parameters rather than starting in interactive mode. Navigate to the generated directory. Open the generated pom.xml configuration file in a text editor. Remove the content inside the <project> section of the pom.xml configuration file after the <name>helloworld Maven Webapp</name> line. Ensure that the file looks like this: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId> USD{group_id} </groupId> <artifactId> USD{artifact_id} </artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name> USD{artifact_id} Maven Webapp</name> </project> The content was removed because it is not required for the application. steps Defining properties in a Maven project . 4.2. Defining properties in a Maven project You can define properties in a Maven pom.xml configuration file as place holders for values. Define the value for JBoss EAP XP server as a property to use the value consistently in the configuration. Prerequisites You have initialized a Maven project. For more information, see Creating a Maven project with maven-archetype-webapp . Procedure Define a property <version.bom.microprofile> as the JBoss EAP XP version on which you will deploy the configured application. <project> ... <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.bom.microprofile>5.0.0.GA-redhat-00009</version.bom.microprofile> </properties> </project> steps Defining the repositories in a Maven project . 4.3. Defining the repositories in a Maven project Define the artifact and plug-in repositories in which Maven looks for artifacts and plug-ins to download. Prerequisites You have initialized a Maven project. 
For more information, see Creating a Maven project with maven-archetype-webapp . Procedure Define the artifacts repository. <project> ... <repositories> <repository> 1 <id>jboss-public-maven-repository</id> <name>JBoss Public Maven Repository</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> <repository> 2 <id>redhat-ga-maven-repository</id> <name>Red Hat GA Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> </repositories> </project> 1 The Red Hat GA Maven repository provides all the productized JBoss EAP and other Red Hat artifacts. 2 The JBoss Public Maven Repository provides artifacts such as WildFly Maven plug-ins Define the plug-ins repository. <project> ... <pluginRepositories> <pluginRepository> <id>jboss-public-maven-repository</id> <name>JBoss Public Maven Repository</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ga-maven-repository</id> <name>Red Hat GA Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories> </project> steps Importing the JBoss EAP MicroProfile BOM as dependency management in a Maven project . 4.4. Importing the JBoss EAP MicroProfile BOM as dependency management in a Maven project Import the JBoss EAP MicroProfile Bill of Materials (BOM) to control the versions of runtime Maven dependencies. When you specify a BOM in the <dependencyManagement> section, you do not need to individually specify the versions of the Maven dependencies defined in the provided scope. Prerequisites You have initialized a Maven project. For more information, see Creating a Maven project with maven-archetype-webapp . Procedure Add a property for the BOM version in the properties section of the pom.xml configuration file. <properties> ... <version.bom.microprofile>5.0.0.GA-redhat-00009</version.bom.microprofile> </properties> The value defined in the property <version.bom.microprofile> is used as the value for the BOM version. Import the JBoss EAP BOMs dependency management. <project> ... <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> 1 <artifactId>jboss-eap-xp-microprofile</artifactId> 2 <version>USD{version.bom.microprofile}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project> 1 groupID of the JBoss EAP-provided BOM. 2 artifactID of the JBoss EAP-provided BOM that provides supported JBoss EAP MicroProfile APIs. Optionally, you can import the JBoss EAP EE with Tools Bill to your project. For more information, see Importing the JBoss EAP BOMs as dependency management in a Maven project . steps Adding plug-in management in a Maven project 4.5. Importing the JBoss EAP BOMs as dependency management in a Maven project You can optionally import the JBoss EAP EE With Tools Bill of materials (BOM). 
The JBoss EAP BOM provides supported JBoss EAP Java EE APIs plus additional JBoss EAP API JARs and client BOMs. You only need to import this BOM if your application requires Jakarta EE APIs in addition to the Microprofile APIs. Prerequisites You have initialized a Maven project. For more information, see Creating a Maven project with maven-archetype-webapp . Procedure Add a property for the BOM version in the properties section of the pom.xml configuration file. <properties> .... <version.bom.ee>8.0.0.GA-redhat-00009</version.bom.ee> </properties> Import the JBoss EAP BOMs dependency management. <project> ... <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> 1 <artifactId>jboss-eap-ee-with-tools</artifactId> 2 <version>USD{version.bom.ee}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project> 1 groupID of the JBoss EAP-provided BOM. 2 artifactID of the JBoss EAP-provided BOM that provides supported JBoss EAP Java EE APIs plus additional JBoss EAP API JARs and client BOMs, and development tools such as Arquillian. steps Adding plug-in management in a Maven project 4.6. Adding plug-in management in a Maven project Add Maven plug-in management section to the pom.xml configuration file to get plug-ins required for Maven CLI commands. Prerequisites You have initialized a Maven project. For more information, see Creating a Maven project with maven-archetype-webapp . Procedure Define the versions for wildfly-maven-plugin and maven-war-plugin , in the <properties> section. <properties> ... <version.plugin.wildfly>4.2.1.Final</version.plugin.wildfly> <version.plugin.war>3.3.2</version.plugin.war> </properties> Add <pluginManagement> in <build> section inside the <project> section. <project> ... <build> <pluginManagement> <plugins> <plugin> 1 <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>USD{version.plugin.wildfly}</version> </plugin> <plugin> 2 <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.plugin.war}</version> </plugin> </plugins> </pluginManagement> </build> </project> 1 You can use the wildfly-maven-plugin to deploy an application to JBoss EAP using the wildfly:deploy command. 2 You need to manage the war plugin version to ensure compatibility with JDK17+. steps Verifying a maven project 4.7. Verifying a maven project Verify that the Maven project you configured builds. Prerequisites You have defined Maven properties. For more information, see Defining properties in a Maven project . You have defined Maven repositories. For more information, see Defining the repositories in a Maven project . You have imported the JBoss EAP Bill of materials (BOMs) as dependency management. For more information, see Importing the JBoss EAP MicroProfile BOM as dependency management in a Maven project . You have added plug-in management. For more information, see Adding plugin management in Maven project for a server hello world application . Procedure Install the Maven dependencies added in the pom.xml locally. You get an output similar to the following: For more information about developing applications for specific MicroProfile APIs, see JBoss EAP XP Quickstarts . Additional resources The bootable JAR | [
"mvn archetype:generate -DgroupId= <group_id> \\ 1 -DartifactId= <artifact_id> \\ 2 -DarchetypeGroupId=org.apache.maven.archetypes \\ 3 -DarchetypeArtifactId=maven-archetype-webapp \\ 4 -DinteractiveMode=false 5",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId> USD{group_id} </groupId> <artifactId> USD{artifact_id} </artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <name> USD{artifact_id} Maven Webapp</name> </project>",
"<project> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <version.bom.microprofile>5.0.0.GA-redhat-00009</version.bom.microprofile> </properties> </project>",
"<project> <repositories> <repository> 1 <id>jboss-public-maven-repository</id> <name>JBoss Public Maven Repository</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> <repository> 2 <id>redhat-ga-maven-repository</id> <name>Red Hat GA Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> </repositories> </project>",
"<project> <pluginRepositories> <pluginRepository> <id>jboss-public-maven-repository</id> <name>JBoss Public Maven Repository</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>redhat-ga-maven-repository</id> <name>Red Hat GA Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories> </project>",
"<properties> <version.bom.microprofile>5.0.0.GA-redhat-00009</version.bom.microprofile> </properties>",
"<project> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> 1 <artifactId>jboss-eap-xp-microprofile</artifactId> 2 <version>USD{version.bom.microprofile}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>",
"<properties> . <version.bom.ee>8.0.0.GA-redhat-00009</version.bom.ee> </properties>",
"<project> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.bom</groupId> 1 <artifactId>jboss-eap-ee-with-tools</artifactId> 2 <version>USD{version.bom.ee}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>",
"<properties> <version.plugin.wildfly>4.2.1.Final</version.plugin.wildfly> <version.plugin.war>3.3.2</version.plugin.war> </properties>",
"<project> <build> <pluginManagement> <plugins> <plugin> 1 <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>USD{version.plugin.wildfly}</version> </plugin> <plugin> 2 <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-war-plugin</artifactId> <version>USD{version.plugin.war}</version> </plugin> </plugins> </pluginManagement> </build> </project>",
"mvn package",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_xp_5.0/develop-microprofile-applications-for-server_default |
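As a concrete illustration of the archetype command from section 4.1, the following invocation uses placeholder coordinates; com.example and helloworld are example values, not required names:

mvn archetype:generate -DgroupId=com.example \
    -DartifactId=helloworld \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-webapp \
    -DinteractiveMode=false
cd helloworld

After editing the generated pom.xml as described above, running mvn package builds the deployable WAR.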
Creating customized images by using Insights image builder | Creating customized images by using Insights image builder Red Hat Enterprise Linux 8 Creating customized system images with Insights image builder and uploading them to cloud environments Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/index |
Chapter 5. Configuring the hostname | Chapter 5. Configuring the hostname 5.1. Server Endpoints Red Hat build of Keycloak exposes different endpoints to talk with applications as well as to allow accessing the administration console. These endpoints can be categorized into three main groups: Frontend Backend Administration Console The base URL for each group has an important impact on how tokens are issued and validated, on how links are created for actions that require the user to be redirected to Red Hat build of Keycloak (for example, when resetting password through email links), and, most importantly, how applications will discover these endpoints when fetching the OpenID Connect Discovery Document from realms/{realm-name}/.well-known/openid-configuration . 5.1.1. Frontend The frontend endpoints are those accessible through a public domain and usually related to authentication/authorization flows that happen through the front-channel. For instance, when an SPA wants to authenticate their users it redirects them to the authorization_endpoint so that users can authenticate using their browsers through the front-channel. By default, when the hostname settings are not set, the base URL for these endpoints is based on the incoming request so that the HTTP scheme, host, port, and path, are the same from the request. The default behavior also has a direct impact on how the server is going to issue tokens given that the issuer is also based on the URL set to the frontend endpoints. If the hostname settings are not set, the token issuer will also be based on the incoming request and also lack consistency if the client is requesting tokens using different URLs. When deploying to production you usually want a consistent URL for the frontend endpoints and the token issuer regardless of how the request is constructed. In order to achieve this consistency, you can set either the hostname or the hostname-url options. Most of the time, it should be enough to set the hostname option in order to change only the host of the frontend URLs: bin/kc.[sh|bat] start --hostname=<host> When using the hostname option the server is going to resolve the HTTP scheme, port, and path, automatically so that: https scheme is used unless you set hostname-strict-https=false if the proxy-headers option is set, the proxy will use the default ports (i.e.: 80 and 443). If the proxy uses a different port, it needs to be specified via the hostname-url configuration option However, if you want to set not only the host but also a scheme, port, and path, you can set the hostname-url option: bin/kc.[sh|bat] start --hostname-url=<scheme>://<host>:<port>/<path> This option gives you more flexibility as you can set the different parts of the URL from a single option. Note that the hostname and hostname-url are mutually exclusive. Note By hostname and proxy-headers configuration options you affect only the static resources URLs, redirect URIs, OIDC well-known endpoints, etc. In order to change, where/on which port the server actually listens on, you need to use the http/tls configuration options (e.g. http-host , https-port , etc.). For more details, see Configuring TLS and All configuration . 5.1.2. Backend The backend endpoints are those accessible through a public domain or through a private network. They are used for a direct communication between the server and clients without any intermediary but plain HTTP requests. 
For instance, after the user is authenticated an SPA wants to exchange the code sent by the server with a set of tokens by sending a token request to token_endpoint . By default, the URLs for backend endpoints are also based on the incoming request. To override this behavior, set the hostname-strict-backchannel configuration option by entering this command: bin/kc.[sh|bat] start --hostname=<value> --hostname-strict-backchannel=true By setting the hostname-strict-backchannel option, the URLs for the backend endpoints are going to be exactly the same as the frontend endpoints. When all applications connected to Red Hat build of Keycloak communicate through the public URL, set hostname-strict-backchannel to true . Otherwise, leave this parameter as false to allow client-server communication through a private network. 5.1.3. Administration Console The server exposes the administration console and static resources using a specific URL. By default, the URLs for the administration console are also based on the incoming request. However, you can set a specific host or base URL if you want to restrict access to the administration console using a specific URL. Similarly to how you set the frontend URLs, you can use the hostname-admin and hostname-admin-url options to achieve that. Note that if HTTPS is enabled ( http-enabled configuration option is set to false, which is the default setting for the production mode), the Red Hat build of Keycloak server automatically assumes you want to use HTTPS URLs. The admin console then tries to contact Red Hat build of Keycloak over HTTPS and HTTPS URLs are also used for its configured redirect/web origin URLs. It is not recommended for production, but you can use HTTP URL as hostname-admin-url to override this behaviour. Most of the time, it should be enough to set the hostname-admin option in order to change only the host of the administration console URLs: bin/kc.[sh|bat] start --hostname-admin=<host> However, if you want to set not only the host but also a scheme, port, and path, you can set the hostname-admin-url option: bin/kc.[sh|bat] start --hostname-admin-url=<scheme>://<host>:<port>/<path> Note that the hostname-admin and hostname-admin-url are mutually exclusive. To reduce attack surface, the administration endpoints for Red Hat build of Keycloak and the Admin Console should not be publicly accessible. Therefore, you can secure them by using a reverse proxy. For more information about which paths to expose using a reverse proxy, see Using a reverse proxy . 5.2. Example Scenarios The following are more example scenarios and the corresponding commands for setting up a hostname. Note that the start command requires setting up TLS. The corresponding options are not shown for example purposes. For more details, see Configuring TLS . 5.2.1. Exposing the server behind a TLS termination proxy In this example, the server is running behind a TLS termination proxy and publicly available from https://mykeycloak . Configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --http-enabled=true --proxy-headers=forwarded|xforwarded 5.2.2. Exposing the server without a proxy In this example, the server is running without a proxy and exposed using a URL using HTTPS. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname-url=https://mykeycloak It is highly recommended using a TLS termination proxy in front of the server for security and availability reasons. For more details, see Using a reverse proxy . 5.2.3. 
Forcing backend endpoints to use the same URL the server is exposed In this example, backend endpoints are exposed using the same URL used by the server so that clients always fetch the same URL regardless of the origin of the request. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-strict-backchannel=true 5.2.4. Exposing the server using a port other than the default ports In this example, the server is accessible using a port other than the default ports. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname-url=https://mykeycloak:8989 5.2.5. Exposing Red Hat build of Keycloak behind a TLS reencrypt proxy using different ports In this example, the server is running behind a proxy and both the server and the proxy are using their own certificates, so the communication between Red Hat build of Keycloak and the proxy is encrypted. The reverse proxy uses the Forwarded header and does not set the X-Forwarded-* headers. We need to keep in mind that the proxy configuration options (as well as hostname configuration options) are not changing the ports on which the server actually is listening on (it changes only the ports of static resources like JavaScript and CSS links, OIDC well-known endpoints, redirect URIs, etc.). Therefore, we need to use HTTP configuration options to change the Red Hat build of Keycloak server to internally listen on a different port, e.g. 8543. The proxy will be listening on the port 8443 (the port visible while accessing the console via a browser). The example hostname my-keycloak.org will be used for the server and similarly the admin console will be accessible via the admin.my-keycloak.org subdomain. Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --proxy-headers=forwarded --https-port=8543 --hostname-url=https://my-keycloak.org:8443 --hostname-admin-url=https://admin.my-keycloak.org:8443 Warning Usage of the proxy-headers option rely on Forwarded and X-Forwarded-* headers, respectively, that have to be set and overwritten by the reverse proxy. Misconfiguration may leave Red Hat build of Keycloak exposed to security issues. For more details, see Using a reverse proxy . 5.3. Troubleshooting To troubleshoot the hostname configuration, you can use a dedicated debug tool which can be enabled as: Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true Then after Red Hat build of Keycloak started properly, open your browser and go to: http://mykeycloak:8080/realms/<your-realm>/hostname-debug 5.4. Relevant options Table 5.1. By default, this endpoint is disabled (--hostname-debug=false) Value hostname Hostname for the Keycloak server. CLI: --hostname Env: KC_HOSTNAME hostname-admin The hostname for accessing the administration console. Use this option if you are exposing the administration console using a hostname other than the value set to the hostname option. CLI: --hostname-admin Env: KC_HOSTNAME_ADMIN hostname-admin-url Set the base URL for accessing the administration console, including scheme, host, port and path CLI: --hostname-admin-url Env: KC_HOSTNAME_ADMIN_URL hostname-debug Toggle the hostname debug page that is accessible at /realms/master/hostname-debug CLI: --hostname-debug Env: KC_HOSTNAME_DEBUG true , false (default) hostname-path This should be set if proxy uses a different context-path for Keycloak. CLI: --hostname-path Env: KC_HOSTNAME_PATH hostname-port The port used by the proxy when exposing the hostname. 
Set this option if the proxy uses a port other than the default HTTP and HTTPS ports. CLI: --hostname-port Env: KC_HOSTNAME_PORT -1 (default) hostname-strict Disables dynamically resolving the hostname from request headers. Should always be set to true in production, unless proxy verifies the Host header. CLI: --hostname-strict Env: KC_HOSTNAME_STRICT true (default), false hostname-strict-backchannel By default backchannel URLs are dynamically resolved from request headers to allow internal and external applications. If all applications use the public URL this option should be enabled. CLI: --hostname-strict-backchannel Env: KC_HOSTNAME_STRICT_BACKCHANNEL true , false (default) hostname-url Set the base URL for frontend URLs, including scheme, host, port and path. CLI: --hostname-url Env: KC_HOSTNAME_URL proxy The proxy address forwarding mode if the server is behind a reverse proxy. CLI: --proxy Env: KC_PROXY DEPRECATED. Use: proxy-headers . none (default), edge , reencrypt , passthrough | [
"bin/kc.[sh|bat] start --hostname=<host>",
"bin/kc.[sh|bat] start --hostname-url=<scheme>://<host>:<port>/<path>",
"bin/kc.[sh|bat] start --hostname=<value> --hostname-strict-backchannel=true",
"bin/kc.[sh|bat] start --hostname-admin=<host>",
"bin/kc.[sh|bat] start --hostname-admin-url=<scheme>://<host>:<port>/<path>",
"bin/kc.[sh|bat] start --hostname=mykeycloak --http-enabled=true --proxy-headers=forwarded|xforwarded",
"bin/kc.[sh|bat] start --hostname-url=https://mykeycloak",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-strict-backchannel=true",
"bin/kc.[sh|bat] start --hostname-url=https://mykeycloak:8989",
"bin/kc.[sh|bat] start --proxy-headers=forwarded --https-port=8543 --hostname-url=https://my-keycloak.org:8443 --hostname-admin-url=https://admin.my-keycloak.org:8443",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_guide/hostname- |
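The options listed in Table 5.1 can also be supplied through their environment variables, which is convenient for containerized deployments. A minimal sketch, reusing the example hostname from this chapter and an illustrative admin hostname:

export KC_HOSTNAME=mykeycloak
export KC_HOSTNAME_ADMIN=admin.mykeycloak
bin/kc.sh start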
Chapter 8. Prometheus [monitoring.coreos.com/v1] | Chapter 8. Prometheus [monitoring.coreos.com/v1] Description The Prometheus custom resource definition (CRD) defines a desired [Prometheus]( https://prometheus.io/docs/prometheus ) setup to run in a Kubernetes cluster. It allows you to specify many options, such as the number of replicas, persistent storage, and the Alertmanagers where firing alerts should be sent, among others. For each Prometheus resource, the Operator deploys one or several StatefulSet objects in the same namespace. The number of StatefulSets is equal to the number of shards, which is 1 by default. The resource defines via label and namespace selectors which ServiceMonitor , PodMonitor , Probe and PrometheusRule objects should be associated with the deployed Prometheus instances. The Operator continuously reconciles the scrape and rules configuration and a sidecar container running in the Prometheus pods triggers a reload of the configuration when needed. Type object Required spec 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Prometheus cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 8.1.1. .spec Description Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalAlertManagerConfigs object AdditionalAlertManagerConfigs specifies a key of a Secret containing additional Prometheus Alertmanager configurations. The Alertmanager configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config The user is responsible for making sure that the configurations are valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. additionalAlertRelabelConfigs object AdditionalAlertRelabelConfigs specifies a key of a Secret containing additional Prometheus alert relabel configurations. The alert relabel configurations are appended to the configuration generated by the Prometheus Operator.
They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs The user is responsible for making sure that the configurations are valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. additionalArgs array AdditionalArgs allows setting additional arguments for the 'prometheus' container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. additionalScrapeConfigs object AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. affinity object Defines the Pods' affinity scheduling rules if specified. alerting object Defines the settings related to Alertmanager. allowOverlappingBlocks boolean AllowOverlappingBlocks enables vertical compaction and vertical query merge in Prometheus. Deprecated: this flag has no effect for Prometheus >= 2.39.0 where overlapping blocks are enabled by default. apiserverConfig object APIServerConfig allows specifying a host and auth methods to access the Kubernetes API server. If null, Prometheus is assumed to run inside of the cluster: it will discover the API servers automatically and use the Pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. arbitraryFSAccessThroughSMs object When true, ServiceMonitor, PodMonitor and Probe objects are forbidden to reference arbitrary files on the file system of the 'prometheus' container. When a ServiceMonitor's endpoint specifies a bearerTokenFile value (e.g. '/var/run/secrets/kubernetes.io/serviceaccount/token'), a malicious target can get access to the Prometheus service account's token in the Prometheus' scrape request. Setting spec.arbitraryFSAccessThroughSM to 'true' would prevent the attack. Users should instead provide the credentials using the spec.bearerTokenSecret field. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in the pod. If the field isn't set, the operator mounts the service account token by default. Warning: be aware that by default, Prometheus requires the service account token for Kubernetes service discovery.
It is possible to use strategic merge patch to project the service account token into the 'prometheus' container. baseImage string Deprecated: use 'spec.image' instead. bodySizeLimit string BodySizeLimit defines a per-scrape limit on response body size. Only valid in Prometheus versions 2.45.0 and newer. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedBodySizeLimit. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/prometheus/configmaps/<configmap-name> in the 'prometheus' container. containers array Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to the Pods or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The names of containers managed by the operator are: * prometheus * config-reloader * thanos-sidecar Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. disableCompaction boolean When true, the Prometheus compaction is disabled. dnsConfig object Defines the DNS configuration for the pods. dnsPolicy string Defines the DNS policy for the pods. enableAdminAPI boolean Enables access to the Prometheus web admin API. WARNING: Enabling the admin APIs enables mutating endpoints, to delete data, shut down Prometheus, and more. Enabling this should be done with care and the user is advised to add additional authentication and authorization via a proxy to ensure only clients authorized to perform these actions can do so. For more information: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis enableFeatures array (string) Enable access to Prometheus feature flags. By default, no features are enabled. Enabling features which are disabled by default is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. For more information see https://prometheus.io/docs/prometheus/latest/feature_flags/ enableRemoteWriteReceiver boolean Enable Prometheus to be used as a receiver for the Prometheus remote write protocol. WARNING: This is not considered an efficient way of ingesting samples. Use it with caution for specific low-volume use cases. It is not suitable for replacing the ingestion via scraping and turning Prometheus into a push-based metrics collection system. For more information see https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver It requires Prometheus >= v2.33.0. enforcedBodySizeLimit string When defined, enforcedBodySizeLimit specifies a global limit on the size of uncompressed response body that will be accepted by Prometheus. Targets responding with a body larger than this many bytes will cause the scrape to fail. It requires Prometheus >= v2.28.0.
When both enforcedBodySizeLimit and bodySizeLimit are defined and greater than zero, the following rules apply: * Scrape objects without a defined bodySizeLimit value will inherit the global bodySizeLimit value (Prometheus >= 2.45.0) or the enforcedBodySizeLimit value (Prometheus < v2.45.0). If Prometheus version is >= 2.45.0 and the enforcedBodySizeLimit is greater than the bodySizeLimit , the bodySizeLimit will be set to enforcedBodySizeLimit . * Scrape objects with a bodySizeLimit value less than or equal to enforcedBodySizeLimit keep their specific value. * Scrape objects with a bodySizeLimit value greater than enforcedBodySizeLimit are set to enforcedBodySizeLimit. enforcedKeepDroppedTargets integer When defined, enforcedKeepDroppedTargets specifies a global limit on the number of targets dropped by relabeling that will be kept in memory. The value overrides any spec.keepDroppedTargets set by ServiceMonitor, PodMonitor, Probe objects unless spec.keepDroppedTargets is greater than zero and less than spec.enforcedKeepDroppedTargets . It requires Prometheus >= v2.47.0. When both enforcedKeepDroppedTargets and keepDroppedTargets are defined and greater than zero, the following rules apply: * Scrape objects without a defined keepDroppedTargets value will inherit the global keepDroppedTargets value (Prometheus >= 2.45.0) or the enforcedKeepDroppedTargets value (Prometheus < v2.45.0). If Prometheus version is >= 2.45.0 and the enforcedKeepDroppedTargets is greater than the keepDroppedTargets , the keepDroppedTargets will be set to enforcedKeepDroppedTargets . * Scrape objects with a keepDroppedTargets value less than or equal to enforcedKeepDroppedTargets keep their specific value. * Scrape objects with a keepDroppedTargets value greater than enforcedKeepDroppedTargets are set to enforcedKeepDroppedTargets. enforcedLabelLimit integer When defined, enforcedLabelLimit specifies a global limit on the number of labels per sample. The value overrides any spec.labelLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelLimit is greater than zero and less than spec.enforcedLabelLimit . It requires Prometheus >= v2.27.0. When both enforcedLabelLimit and labelLimit are defined and greater than zero, the following rules apply: * Scrape objects without a defined labelLimit value will inherit the global labelLimit value (Prometheus >= 2.45.0) or the enforcedLabelLimit value (Prometheus < v2.45.0). If Prometheus version is >= 2.45.0 and the enforcedLabelLimit is greater than the labelLimit , the labelLimit will be set to enforcedLabelLimit . * Scrape objects with a labelLimit value less than or equal to enforcedLabelLimit keep their specific value. * Scrape objects with a labelLimit value greater than enforcedLabelLimit are set to enforcedLabelLimit. enforcedLabelNameLengthLimit integer When defined, enforcedLabelNameLengthLimit specifies a global limit on the length of labels name per sample. The value overrides any spec.labelNameLengthLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelNameLengthLimit is greater than zero and less than spec.enforcedLabelNameLengthLimit . It requires Prometheus >= v2.27.0. When both enforcedLabelNameLengthLimit and labelNameLengthLimit are defined and greater than zero, the following rules apply: * Scrape objects without a defined labelNameLengthLimit value will inherit the global labelNameLengthLimit value (Prometheus >= 2.45.0) or the enforcedLabelNameLengthLimit value (Prometheus < v2.45.0). 
If Prometheus version is >= 2.45.0 and the enforcedLabelNameLengthLimit is greater than the labelNameLengthLimit , the labelNameLengthLimit will be set to enforcedLabelNameLengthLimit . * Scrape objects with a labelNameLengthLimit value less than or equal to enforcedLabelNameLengthLimit keep their specific value. * Scrape objects with a labelNameLengthLimit value greater than enforcedLabelNameLengthLimit are set to enforcedLabelNameLengthLimit. enforcedLabelValueLengthLimit integer When not null, enforcedLabelValueLengthLimit defines a global limit on the length of labels value per sample. The value overrides any spec.labelValueLengthLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.labelValueLengthLimit is greater than zero and less than spec.enforcedLabelValueLengthLimit . It requires Prometheus >= v2.27.0. When both enforcedLabelValueLengthLimit and labelValueLengthLimit are defined and greater than zero, the following rules apply: * Scrape objects without a defined labelValueLengthLimit value will inherit the global labelValueLengthLimit value (Prometheus >= 2.45.0) or the enforcedLabelValueLengthLimit value (Prometheus < v2.45.0). If Prometheus version is >= 2.45.0 and the enforcedLabelValueLengthLimit is greater than the labelValueLengthLimit , the labelValueLengthLimit will be set to enforcedLabelValueLengthLimit . * Scrape objects with a labelValueLengthLimit value less than or equal to enforcedLabelValueLengthLimit keep their specific value. * Scrape objects with a labelValueLengthLimit value greater than enforcedLabelValueLengthLimit are set to enforcedLabelValueLengthLimit. enforcedNamespaceLabel string When not empty, a label will be added to: 1. All metrics scraped from ServiceMonitor , PodMonitor , Probe and ScrapeConfig objects. 2. All metrics generated from recording rules defined in PrometheusRule objects. 3. All alerts generated from alerting rules defined in PrometheusRule objects. 4. All vector selectors of PromQL expressions defined in PrometheusRule objects. The label will not added for objects referenced in spec.excludedFromEnforcement . The label's name is this field's value. The label's value is the namespace of the ServiceMonitor , PodMonitor , Probe , PrometheusRule or ScrapeConfig object. enforcedSampleLimit integer When defined, enforcedSampleLimit specifies a global limit on the number of scraped samples that will be accepted. This overrides any spec.sampleLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.sampleLimit is greater than zero and less than spec.enforcedSampleLimit . It is meant to be used by admins to keep the overall number of samples/series under a desired limit. When both enforcedSampleLimit and sampleLimit are defined and greater than zero, the following rules apply: * Scrape objects without a defined sampleLimit value will inherit the global sampleLimit value (Prometheus >= 2.45.0) or the enforcedSampleLimit value (Prometheus < v2.45.0). If Prometheus version is >= 2.45.0 and the enforcedSampleLimit is greater than the sampleLimit , the sampleLimit will be set to enforcedSampleLimit . * Scrape objects with a sampleLimit value less than or equal to enforcedSampleLimit keep their specific value. * Scrape objects with a sampleLimit value greater than enforcedSampleLimit are set to enforcedSampleLimit. enforcedTargetLimit integer When defined, enforcedTargetLimit specifies a global limit on the number of scraped targets. 
The value overrides any spec.targetLimit set by ServiceMonitor, PodMonitor, Probe objects unless spec.targetLimit is greater than zero and less than spec.enforcedTargetLimit . It is meant to be used by admins to to keep the overall number of targets under a desired limit. When both enforcedTargetLimit and targetLimit are defined and greater than zero, the following rules apply: * Scrape objects without a defined targetLimit value will inherit the global targetLimit value (Prometheus >= 2.45.0) or the enforcedTargetLimit value (Prometheus < v2.45.0). If Prometheus version is >= 2.45.0 and the enforcedTargetLimit is greater than the targetLimit , the targetLimit will be set to enforcedTargetLimit . * Scrape objects with a targetLimit value less than or equal to enforcedTargetLimit keep their specific value. * Scrape objects with a targetLimit value greater than enforcedTargetLimit are set to enforcedTargetLimit. evaluationInterval string Interval between rule evaluations. Default: "30s" excludedFromEnforcement array List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. It is only applicable if spec.enforcedNamespaceLabel set to true. excludedFromEnforcement[] object ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. exemplars object Exemplars related settings that are runtime reloadable. It requires to enable the exemplar-storage feature flag to be effective. externalLabels object (string) The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). Labels defined by spec.replicaExternalLabelName and spec.prometheusExternalLabelName take precedence over this list. externalUrl string The external URL under which the Prometheus service is externally available. This is necessary to generate correct URLs (for instance if Prometheus is accessible behind an Ingress resource). hostAliases array Optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostNetwork boolean Use the host's network namespace if true. Make sure to understand the security implications if you want to enable it ( https://kubernetes.io/docs/concepts/configuration/overview/ ). When hostNetwork is enabled, this will set the DNS policy to ClusterFirstWithHostNet automatically (unless .spec.DNSPolicy is set to a different value). ignoreNamespaceSelectors boolean When true, spec.namespaceSelector from all PodMonitor, ServiceMonitor and Probe objects will be ignored. They will only discover targets within the namespace of the PodMonitor, ServiceMonitor and Probe object. image string Container image name for Prometheus. If specified, it takes precedence over the spec.baseImage , spec.tag and spec.sha fields. Specifying spec.version is still necessary to ensure the Prometheus Operator knows which version of Prometheus is being configured. If neither spec.image nor spec.baseImage are defined, the operator will use the latest upstream version of Prometheus available at the time when the operator was released. imagePullPolicy string Image pull policy for the 'prometheus', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. 
imagePullSecrets array An optional list of references to Secrets in the same namespace to use for pulling images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows injecting initContainers into the Pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init container if they share the same name and modifications are done via a strategic merge patch. The names of init containers managed by the operator are: * init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. keepDroppedTargets integer Per-scrape limit on the number of targets dropped by relabeling that will be kept in memory. 0 means no limit. It requires Prometheus >= v2.47.0. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedKeepDroppedTargets. labelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedLabelLimit. labelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedLabelNameLengthLimit. labelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.45.0 and newer. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedLabelValueLengthLimit. listenLocal boolean When true, the Prometheus server listens on the loopback address instead of the Pod IP's address. logFormat string Log format for Prometheus and the config-reloader sidecar. logLevel string Log level for Prometheus and the config-reloader sidecar. maximumStartupDurationSeconds integer Defines the maximum time that the prometheus container's startup probe will wait before being considered failed. The startup probe will return success after the WAL replay is complete. If set, the value should be greater than 60 (seconds). Otherwise it will be equal to 600 seconds (15 minutes). minReadySeconds integer Minimum number of seconds for which a newly created Pod should be ready without any of its containers crashing for it to be considered available.
nodeSelector object (string) Defines on which Nodes the Pods are scheduled. otlp object Settings related to the OTLP receiver feature. It requires Prometheus >= v2.55.0. overrideHonorLabels boolean When true, Prometheus resolves label conflicts by renaming the labels in the scraped data to "exported_" for all targets created from ServiceMonitor, PodMonitor and ScrapeConfig objects. Otherwise the HonorLabels field of the service or pod monitor applies. In practice, overrideHonorLabels:true enforces honorLabels:false for all ServiceMonitor, PodMonitor and ScrapeConfig objects. overrideHonorTimestamps boolean When true, Prometheus ignores the timestamps for all the targets created from service and pod monitors. Otherwise the HonorTimestamps field of the service or pod monitor applies. paused boolean When a Prometheus deployment is paused, no actions except for deletion will be performed on the underlying objects. persistentVolumeClaimRetentionPolicy object The field controls if and how PVCs are deleted during the lifecycle of a StatefulSet. The default behavior is all PVCs are retained. This is an alpha field from Kubernetes 1.23 until 1.26 and a beta field from 1.26. It requires enabling the StatefulSetAutoDeletePVC feature gate. podMetadata object PodMetadata configures labels and annotations which are propagated to the Prometheus pods. The following items are reserved and cannot be overridden: * "prometheus" label, set to the name of the Prometheus object. * "app.kubernetes.io/instance" label, set to the name of the Prometheus object. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "prometheus". * "app.kubernetes.io/version" label, set to the Prometheus version. * "operator.prometheus.io/name" label, set to the name of the Prometheus object. * "operator.prometheus.io/shard" label, set to the shard number of the Prometheus object. * "kubectl.kubernetes.io/default-container" annotation, set to "prometheus". podMonitorNamespaceSelector object Namespaces to match for PodMonitors discovery. An empty label selector matches all namespaces. A null label selector (default value) matches the current namespace only. podMonitorSelector object PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the next major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. podTargetLabels array (string) PodTargetLabels are appended to the spec.podTargetLabels field of all PodMonitor and ServiceMonitor objects. portName string Port name used for the pods and governing service. Default: "web" priorityClassName string Priority class assigned to the Pods. probeNamespaceSelector object Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only.
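The selector semantics above (an empty selector matches everything, a null/omitted selector matches nothing or only the current namespace) can be illustrated with a small sketch; the team label is an assumption made for the example:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  # An empty selector ({}) would select all PodMonitors; omitting the field (null) selects none.
  podMonitorSelector:
    matchLabels:
      team: payments               # illustrative label
  # An empty namespace selector matches every namespace; omitting it (null)
  # restricts discovery to the namespace of this Prometheus object.
  podMonitorNamespaceSelector: {}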
probeSelector object Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the next major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. prometheusExternalLabelName string Name of Prometheus external label used to denote the Prometheus instance name. The external label will not be added when the field is set to the empty string ( "" ). Default: "prometheus" prometheusRulesExcludedFromEnforce array Defines the list of PrometheusRule objects to which the namespace label enforcement doesn't apply. This is only relevant when spec.enforcedNamespaceLabel is set to true. Deprecated: use spec.excludedFromEnforcement instead. prometheusRulesExcludedFromEnforce[] object PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. query object QuerySpec defines the configuration of the Prometheus query service. queryLogFile string queryLogFile specifies the file to which PromQL queries are logged. If the filename has an empty path, e.g. 'query.log', the Prometheus Pods will mount the file into an emptyDir volume at /var/log/prometheus . If a full path is provided, e.g. '/var/log/prometheus/query.log', you must mount a volume in the specified directory and it must be writable. This is because the prometheus container runs with a read-only root filesystem for security reasons. Alternatively, the location can be set to a standard I/O stream, e.g. /dev/stdout , to log query information to the default Prometheus log stream. reloadStrategy string Defines the strategy used to reload the Prometheus configuration. If not specified, the configuration is reloaded using the /-/reload HTTP endpoint. remoteRead array Defines the list of remote read configurations. remoteRead[] object RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. remoteWrite array Defines the list of remote write configurations. remoteWrite[] object RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. remoteWriteReceiverMessageVersions array (string) List of the protobuf message versions to accept when receiving the remote writes. It requires Prometheus >= v2.54.0. replicaExternalLabelName string Name of Prometheus external label used to denote the replica name. The external label will not be added when the field is set to the empty string ( "" ). Default: "prometheus_replica" replicas integer Number of replicas of each shard to deploy for a Prometheus deployment. spec.replicas multiplied by spec.shards is the total number of Pods created. Default: 1 resources object Defines the resource requests and limits of the 'prometheus' container. retention string How long to retain the Prometheus data. Default: "24h" if spec.retention and spec.retentionSize are empty.
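A small sketch combining the replica, external-label and remote-write fields described above; the remote write URL and cluster label are illustrative placeholders:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  replicas: 2
  retention: 24h
  externalLabels:
    cluster: prod-eu-1                               # illustrative external label
  # "prometheus_replica" distinguishes samples from the two replicas; set to "" to drop it.
  replicaExternalLabelName: prometheus_replica
  remoteWrite:
  - url: https://metrics.example.com/api/v1/write    # illustrative receiver endpoint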
retentionSize string Maximum number of bytes used by the Prometheus data. routePrefix string The route prefix Prometheus registers HTTP handlers for. This is useful when using spec.externalURL , and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example, for use with kubectl proxy . ruleNamespaceSelector object Namespaces to match for PrometheusRule discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. ruleQueryOffset string Defines the offset by which the rule evaluation timestamps of this particular group are shifted into the past. It requires Prometheus >= v2.53.0. ruleSelector object PrometheusRule objects to be selected for rule evaluation. An empty label selector matches all objects. A null label selector matches no objects. rules object Defines the configuration of the Prometheus rules' engine. runtime object RuntimeConfig configures the values for the Prometheus process behavior. sampleLimit integer SampleLimit defines a per-scrape limit on the number of scraped samples that will be accepted. Only valid in Prometheus versions 2.45.0 and newer. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedSampleLimit. scrapeClasses array List of scrape classes to expose to scraping objects such as PodMonitors, ServiceMonitors, Probes and ScrapeConfigs. This is an experimental feature ; it may change in any upcoming release in a breaking way. scrapeClasses[] object scrapeConfigNamespaceSelector object Namespaces to match for ScrapeConfig discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Note that the ScrapeConfig custom resource definition is currently at Alpha level. scrapeConfigSelector object ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the next major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Note that the ScrapeConfig custom resource definition is currently at Alpha level. scrapeInterval string Interval between consecutive scrapes. Default: "30s" scrapeProtocols array (string) The protocols to negotiate during a scrape. It tells clients the protocols supported by Prometheus in order of preference (from most to least preferred). If unset, Prometheus uses its default value. It requires Prometheus >= v2.49.0. scrapeTimeout string Number of seconds to wait until a scrape request times out. secrets array (string) Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/prometheus/secrets/<secret-name> in the 'prometheus' container.
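A sketch of the scrape timing, rule selection and Secret mounting behaviour described above; the Secret name and rule label are assumptions made for the example:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  scrapeInterval: 30s
  scrapeTimeout: 10s
  ruleSelector:
    matchLabels:
      role: alert-rules            # illustrative label on PrometheusRule objects
  # The Secret "kafka-client-tls" (illustrative) becomes a volume named
  # secret-kafka-client-tls, mounted at /etc/prometheus/secrets/kafka-client-tls.
  secrets:
  - kafka-client-tls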
securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. serviceDiscoveryRole string Defines the service discovery role used to discover targets from ServiceMonitor objects and Alertmanager endpoints. If set, the value should be either "Endpoints" or "EndpointSlice". If unset, the operator assumes the "Endpoints" role. serviceMonitorNamespaceSelector object Namespaces to match for ServiceMonitors discovery. An empty label selector matches all namespaces. A null label selector (default value) matches the current namespace only. serviceMonitorSelector object ServiceMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the next major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. sha string Deprecated: use 'spec.image' instead. The image's digest can be specified as part of the image name. shards integer Number of shards to distribute targets onto. spec.replicas multiplied by spec.shards is the total number of Pods created. Note that scaling down shards will not reshard data onto remaining instances; it must be moved manually. Increasing shards will not reshard data either, but it will continue to be available from the same instances. To query globally, use Thanos sidecar and Thanos querier or remote write data to a central location. Sharding is performed on the content of the __address__ target meta-label for PodMonitors and ServiceMonitors and __param_target__ for Probes. Default: 1 storage object Storage defines the storage used by Prometheus. tag string Deprecated: use 'spec.image' instead. The image's tag can be specified as part of the image name. targetLimit integer TargetLimit defines a limit on the number of scraped targets that will be accepted. Only valid in Prometheus versions 2.45.0 and newer. Note that the global limit only applies to scrape objects that don't specify an explicit limit value. If you want to enforce a maximum limit for all scrape objects, refer to enforcedTargetLimit. thanos object Defines the configuration of the optional Thanos sidecar. tolerations array Defines the Pods' tolerations if specified. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array Defines the pod's topology spread constraints if specified. topologySpreadConstraints[] object tracingConfig object TracingConfig configures tracing in Prometheus. This is an experimental feature ; it may change in any upcoming release in a breaking way. tsdb object Defines the runtime reloadable configuration of the time series database (TSDB). It requires Prometheus >= v2.39.0 or PrometheusAgent >= v2.54.0. version string Version of Prometheus being deployed. The operator uses this information to generate the Prometheus StatefulSet and configuration files. If not specified, the operator assumes the latest upstream version of Prometheus available at the time when the version of the operator was released.
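A sketch of sharding and storage sizing based on the shards and replicas fields above; the storage stanza assumes the operator's usual volumeClaimTemplate layout, which is not spelled out in this table, and all sizes are illustrative:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  # 2 shards x 2 replicas = 4 Pods in total; each shard scrapes a subset of the targets.
  shards: 2
  replicas: 2
  retention: 7d
  retentionSize: 40GB
  storage:
    volumeClaimTemplate:
      spec:
        resources:
          requests:
            storage: 50Gi          # illustrative PVC size per Pod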
volumeMounts array VolumeMounts allows the configuration of additional VolumeMounts. VolumeMounts will be appended to other VolumeMounts in the 'prometheus' container that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows the configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. walCompression boolean Configures compression of the write-ahead log (WAL) using Snappy. WAL compression is enabled by default for Prometheus >= 2.20.0. Requires Prometheus v2.11.0 and above. web object Defines the configuration of the Prometheus web server. 8.1.2. .spec.additionalAlertManagerConfigs Description AdditionalAlertManagerConfigs specifies a key of a Secret containing additional Prometheus Alertmanager configurations. The Alertmanager configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config The user is responsible for making sure that the configurations are valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible Alertmanager configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined. 8.1.3. .spec.additionalAlertRelabelConfigs Description AdditionalAlertRelabelConfigs specifies a key of a Secret containing additional Prometheus alert relabel configurations. The alert relabel configurations are appended to the configuration generated by the Prometheus Operator. They must be formatted according to the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs The user is responsible for making sure that the configurations are valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined.
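Both of the preceding fields (and spec.additionalScrapeConfigs , described in a later subsection) follow the same Secret key reference pattern; a sketch with an illustrative Secret name and keys:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  # Each field selects one key inside an existing Secret in the same namespace;
  # the Secret name and keys below are illustrative.
  additionalAlertManagerConfigs:
    name: extra-alerting
    key: alertmanager-configs.yaml
    optional: false
  additionalAlertRelabelConfigs:
    name: extra-alerting
    key: alert-relabel-configs.yaml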
8.1.4. .spec.additionalArgs Description AdditionalArgs allows setting additional arguments for the 'prometheus' container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. Type array 8.1.5. .spec.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile). 8.1.6. .spec.additionalScrapeConfigs Description AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible for making sure they are valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined. 8.1.7. .spec.affinity Description Defines the Pods' affinity scheduling rules if specified. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 8.1.8. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 8.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 8.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 8.1.11. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 8.1.12. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 8.1.13. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. 
values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.14. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 8.1.15. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 8.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 8.1.18. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 8.1.19. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 8.1.20. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.21. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 8.1.22. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 8.1.23. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 8.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 8.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 8.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). 
namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.28. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.29. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.30. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.31. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.32. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 8.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. 
The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.36. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.37. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.38. 
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.39. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.40. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.41. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 8.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 8.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 8.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. 
This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.46. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.47. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.48. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. 
The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.49. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.50. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 8.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key in (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both matchLabelKeys and labelSelector. Also, matchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). 
mismatchLabelKeys array (string) MismatchLabelKeys is a set of pod label keys to select which pods will be taken into consideration. The keys are used to lookup values from the incoming pod labels, those key-value labels are merged with labelSelector as key notin (value) to select the group of existing pods which pods will be taken into consideration for the incoming pod's pod (anti) affinity. Keys that don't exist in the incoming pod labels will be ignored. The default value is empty. The same key is forbidden to exist in both mismatchLabelKeys and labelSelector. Also, mismatchLabelKeys cannot be set when labelSelector isn't set. This is a beta field and requires enabling MatchLabelKeysInPodAffinity feature gate (enabled by default). namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 8.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. If it's null, this PodAffinityTerm matches with no Pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.54. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.55. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
This array is replaced during a strategic merge patch. 8.1.56. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.57. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.58. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.59. .spec.alerting Description Defines the settings related to Alertmanager. Type object Required alertmanagers Property Type Description alertmanagers array Alertmanager endpoints where Prometheus should send alerts to. alertmanagers[] object AlertmanagerEndpoints defines a selection of a single Endpoints object containing Alertmanager IPs to fire alerts against. 8.1.60. .spec.alerting.alertmanagers Description Alertmanager endpoints where Prometheus should send alerts to. Type array 8.1.61. .spec.alerting.alertmanagers[] Description AlertmanagerEndpoints defines a selection of a single Endpoints object containing Alertmanager IPs to fire alerts against. Type object Required name port Property Type Description alertRelabelings array Relabeling configs applied before sending alerts to a specific Alertmanager. It requires Prometheus >= v2.51.0. alertRelabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config apiVersion string Version of the Alertmanager API that Prometheus uses to send alerts. It can be "v1" or "v2". authorization object Authorization section for Alertmanager. Cannot be set at the same time as basicAuth , bearerTokenFile or sigv4 . basicAuth object BasicAuth configuration for Alertmanager. Cannot be set at the same time as bearerTokenFile , authorization or sigv4 . 
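For illustration, an Alertmanager endpoint entry might look like the following sketch; the Endpoints object name, namespace, and Secret name are assumptions made for the example, not values defined by this API:

spec:
  alerting:
    alertmanagers:
      - name: alertmanager-main          # name of the Endpoints object (placeholder)
        namespace: monitoring            # namespace of the Endpoints object (placeholder)
        port: web                        # port name or number exposing the Alertmanager API
        scheme: https
        apiVersion: v2
        authorization:
          type: Bearer
          credentials:
            name: alertmanager-token     # placeholder Secret holding the bearer token
            key: token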
bearerTokenFile string File to read bearer token for Alertmanager. Cannot be set at the same time as basicAuth , authorization , or sigv4 . Deprecated: this will be removed in a future release. Prefer using authorization . enableHttp2 boolean Whether to enable HTTP2. name string Name of the Endpoints object in the namespace. namespace string Namespace of the Endpoints object. If not set, the object will be discovered in the namespace of the Prometheus object. pathPrefix string Prefix for the HTTP path alerts are pushed to. port integer-or-string Port on which the Alertmanager API is exposed. relabelings array Relabel configuration applied to the discovered Alertmanagers. relabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config scheme string Scheme to use when firing alerts. sigv4 object Sigv4 allows to configures AWS's Signature Verification 4 for the URL. It requires Prometheus >= v2.48.0. Cannot be set at the same time as basicAuth , bearerTokenFile or authorization . timeout string Timeout is a per-target Alertmanager timeout when pushing alerts. tlsConfig object TLS Config to use for Alertmanager. 8.1.62. .spec.alerting.alertmanagers[].alertRelabelings Description Relabeling configs applied before sending alerts to a specific Alertmanager. It requires Prometheus >= v2.51.0. Type array 8.1.63. .spec.alerting.alertmanagers[].alertRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.64. .spec.alerting.alertmanagers[].authorization Description Authorization section for Alertmanager. Cannot be set at the same time as basicAuth , bearerTokenFile or sigv4 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.65. .spec.alerting.alertmanagers[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.66. .spec.alerting.alertmanagers[].basicAuth Description BasicAuth configuration for Alertmanager. Cannot be set at the same time as bearerTokenFile , authorization or sigv4 . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.67. .spec.alerting.alertmanagers[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.68. .spec.alerting.alertmanagers[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.69. .spec.alerting.alertmanagers[].relabelings Description Relabel configuration applied to the discovered Alertmanagers. Type array 8.1.70. .spec.alerting.alertmanagers[].relabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. 
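A minimal relabeling sketch, assuming an illustrative discovered source label and target label (the labels actually available depend on the service discovery in use):

spec:
  alerting:
    alertmanagers:
      - name: alertmanager-main              # placeholder Endpoints object
        namespace: monitoring
        port: web
        relabelings:
          - action: Replace
            sourceLabels:
              - __meta_kubernetes_pod_name   # illustrative discovered label
            regex: (.+)
            targetLabel: pod
            replacement: $1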
targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.71. .spec.alerting.alertmanagers[].sigv4 Description Sigv4 allows configuring AWS Signature Version 4 signing for the URL. It requires Prometheus >= v2.48.0. Cannot be set at the same time as basicAuth , bearerTokenFile or authorization . Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS role ARN used to authenticate. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. 8.1.72. .spec.alerting.alertmanagers[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.73. .spec.alerting.alertmanagers[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.74. .spec.alerting.alertmanagers[].tlsConfig Description TLS Config to use for Alertmanager. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.75. .spec.alerting.alertmanagers[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.76.
.spec.alerting.alertmanagers[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.77. .spec.alerting.alertmanagers[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.78. .spec.alerting.alertmanagers[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.79. .spec.alerting.alertmanagers[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.80. .spec.alerting.alertmanagers[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.81. .spec.alerting.alertmanagers[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.82. .spec.apiserverConfig Description APIServerConfig allows specifying a host and auth methods to access the Kubernetes API server.
If null, Prometheus is assumed to run inside of the cluster: it will discover the API servers automatically and use the Pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. Type object Required host Property Type Description authorization object Authorization section for the API server. Cannot be set at the same time as basicAuth , bearerToken , or bearerTokenFile . basicAuth object BasicAuth configuration for the API server. Cannot be set at the same time as authorization , bearerToken , or bearerTokenFile . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File to read bearer token for accessing apiserver. Cannot be set at the same time as basicAuth , authorization , or bearerToken . Deprecated: this will be removed in a future release. Prefer using authorization . host string Kubernetes API address consisting of a hostname or IP address followed by an optional port number. tlsConfig object TLS Config to use for the API server. 8.1.83. .spec.apiserverConfig.authorization Description Authorization section for the API server. Cannot be set at the same time as basicAuth , bearerToken , or bearerTokenFile . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.84. .spec.apiserverConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.85. .spec.apiserverConfig.basicAuth Description BasicAuth configuration for the API server. Cannot be set at the same time as authorization , bearerToken , or bearerTokenFile . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.86. .spec.apiserverConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.87. .spec.apiserverConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. 
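The apiserverConfig fields above might be combined as in the following sketch; the host, Secret, and ConfigMap names are placeholders chosen for the example:

spec:
  apiserverConfig:
    host: https://kubernetes.default.svc:443     # placeholder API server address
    basicAuth:
      username:
        name: apiserver-credentials              # placeholder Secret name
        key: username
      password:
        name: apiserver-credentials
        key: password
    tlsConfig:
      ca:
        configMap:
          name: apiserver-ca                     # placeholder ConfigMap with the CA certificate
          key: ca.crt

Because basicAuth, authorization, bearerToken, and bearerTokenFile are mutually exclusive, only one authentication method is set here.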
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.88. .spec.apiserverConfig.tlsConfig Description TLS Config to use for the API server. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.89. .spec.apiserverConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.90. .spec.apiserverConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.91. .spec.apiserverConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.92. .spec.apiserverConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.93. .spec.apiserverConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.94. .spec.apiserverConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.95. .spec.apiserverConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.96. .spec.arbitraryFSAccessThroughSMs Description When true, ServiceMonitor, PodMonitor and Probe object are forbidden to reference arbitrary files on the file system of the 'prometheus' container. When a ServiceMonitor's endpoint specifies a bearerTokenFile value (e.g. '/var/run/secrets/kubernetes.io/serviceaccount/token'), a malicious target can get access to the Prometheus service account's token in the Prometheus' scrape request. Setting spec.arbitraryFSAccessThroughSM to 'true' would prevent the attack. Users should instead provide the credentials using the spec.bearerTokenSecret field. Type object Property Type Description deny boolean 8.1.97. .spec.containers Description Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to the Pods or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The names of containers managed by the operator are: * prometheus * config-reloader * thanos-sidecar Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 8.1.98. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. 
FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 8.1.99. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 8.1.100. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 8.1.101. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 8.1.102. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.103. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. 
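A sketch of environment variables on an injected container, using a downward-API field reference and an optional Secret key; the container, variable, image, and Secret names are hypothetical:

spec:
  containers:
    - name: my-sidecar                           # hypothetical additional container
      image: registry.example.com/sidecar:1.0    # placeholder image
      env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace      # resolved from the pod metadata
        - name: EXTRA_TOKEN                      # hypothetical variable
          valueFrom:
            secretKeyRef:
              name: extra-secret                 # hypothetical Secret
              key: token
              optional: true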
Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.104. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.105. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.106. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 8.1.107. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 8.1.108. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 8.1.109. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 8.1.110. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. 
Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 8.1.111. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.112. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.113. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.114. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.115. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. 
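A lifecycle-hook sketch for a hypothetical injected sidecar; the endpoint, port, and header are assumptions made for the example:

spec:
  containers:
    - name: my-sidecar
      image: registry.example.com/sidecar:1.0    # placeholder image
      lifecycle:
        postStart:
          httpGet:
            path: /warmup                        # hypothetical endpoint
            port: 8080
            scheme: HTTP
            httpHeaders:
              - name: X-Startup-Check            # hypothetical header
                value: "true"
        preStop:
          exec:
            command: ["/bin/sh", "-c", "sleep 5"]   # give in-flight requests time to drain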
value string The header field value 8.1.116. .spec.containers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 8.1.117. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.118. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.119. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.120. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.121. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.122. 
.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.123. .spec.containers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 8.1.124. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.125. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.126. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. 
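A liveness-probe sketch for a hypothetical injected sidecar, using an HTTP check; an exec or tcpSocket action could be substituted, and the endpoint and port are assumptions:

spec:
  containers:
    - name: my-sidecar
      livenessProbe:
        httpGet:
          path: /healthz                # hypothetical endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
        timeoutSeconds: 1
        failureThreshold: 3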
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.127. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.128. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.129. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.130. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.131. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.132. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 8.1.133. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. 
Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 8.1.134. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.135. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.136. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). 
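For illustration only, the following sketch shows how these grpc probe fields might be set on a container injected through spec.containers; the container name, port number, and service name are hypothetical examples, not operator defaults. The optional service field selects the service name that is placed in the gRPC HealthCheckRequest.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  containers:
    - name: example-sidecar          # hypothetical injected container
      livenessProbe:
        grpc:
          port: 9095                 # must be in the range 1 to 65535
          service: prometheus-health # name sent in the gRPC HealthCheckRequest
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3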
If this is not specified, the default behavior is defined by gRPC. 8.1.137. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.138. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.139. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.140. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.141. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 8.1.142. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 8.1.143. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.144. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.145. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 8.1.146. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.147. .spec.containers[].securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 8.1.148. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 8.1.149. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.150. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. 
The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.151. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.152. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.153. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.154. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.155. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.156. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.157. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.158. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.159. 
.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 8.1.160. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 8.1.161. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 8.1.162. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.163. .spec.dnsConfig Description Defines the DNS configuration for the pods. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. 8.1.164. .spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. 
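As a hedged illustration of how the dnsConfig fields described above fit together (all values are examples, not defaults), a Prometheus resource might carry:

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  dnsConfig:
    nameservers:
      - 192.0.2.10                   # example nameserver from the documentation address range
    searches:
      - monitoring.svc.cluster.local # example search domain
    options:
      - name: ndots                  # resolver option name; the value field is optional
        value: "5"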
Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 8.1.165. .spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Required name Property Type Description name string Name is required and must be unique. value string Value is optional. 8.1.166. .spec.excludedFromEnforcement Description List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. It is only applicable if spec.enforcedNamespaceLabel set to true. Type array 8.1.167. .spec.excludedFromEnforcement[] Description ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. Type object Required namespace resource Property Type Description group string Group of the referent. When not specified, it defaults to monitoring.coreos.com name string Name of the referent. When not set, all resources in the namespace are matched. namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resource string Resource of the referent. 8.1.168. .spec.exemplars Description Exemplars related settings that are runtime reloadable. It requires to enable the exemplar-storage feature flag to be effective. Type object Property Type Description maxSize integer Maximum number of exemplars stored in memory for all series. exemplar-storage itself must be enabled using the spec.enableFeature option for exemplars to be scraped in the first place. If not set, Prometheus uses its default value. A value of zero or less than zero disables the storage. 8.1.169. .spec.hostAliases Description Optional list of hosts and IPs that will be injected into the Pod's hosts file if specified. Type array 8.1.170. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 8.1.171. .spec.imagePullSecrets Description An optional list of references to Secrets in the same namespace to use for pulling images from registries. See http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 8.1.172. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.173. .spec.initContainers Description InitContainers allows injecting initContainers to the Pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The names of init container name managed by the operator are: * init-config-reloader . 
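For illustration, a minimal sketch of injecting an additional init container is shown below; the container name, image, and volume are hypothetical, and a matching entry under spec.volumes is assumed.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example
spec:
  initContainers:
    - name: fetch-credentials        # hypothetical; does not collide with init-config-reloader
      image: registry.example.com/tools/fetcher:1.0
      command:
        - /bin/sh
        - -c
      args:
        - cp /secrets/token /work/token
      volumeMounts:
        - name: workdir              # assumes a volume named workdir is defined in spec.volumes
          mountPath: /work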
Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 8.1.174. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed.
Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted.
If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 8.1.175. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 8.1.176. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 8.1.177. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 8.1.178.
.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.179. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.180. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.181. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.182. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 8.1.183. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 8.1.184. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap must be defined 8.1.185. 
.spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret must be defined 8.1.186. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 8.1.187. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.188. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.189. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.190. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.191. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.192. .spec.initContainers[].lifecycle.postStart.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 8.1.193. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.194. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. sleep object Sleep represents the duration that the container should sleep before being terminated. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 8.1.195. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.196. 
.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.197. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.198. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.199. .spec.initContainers[].lifecycle.preStop.sleep Description Sleep represents the duration that the container should sleep before being terminated. Type object Required seconds Property Type Description seconds integer Seconds is the number of seconds to sleep. 8.1.200. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.201. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. 
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.202. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.203. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.204. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.205. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.206. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.207. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. 
Name must be an IANA_SVC_NAME. 8.1.208. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 8.1.209. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 8.1.210. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.211. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.212. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.213. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.214. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.215. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.216. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.217. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 8.1.218. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 8.1.219. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. 
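For illustration only, here is a minimal sketch of how these resource fields are commonly populated on an injected init container in a Prometheus resource; the object name, container name, image, and quantities shown are hypothetical and are not defaults defined by this API:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example                                  # hypothetical name
spec:
  initContainers:
  - name: init-example                           # hypothetical init container
    image: registry.example.com/busybox:latest   # hypothetical image
    resources:
      requests:                                  # minimum resources reserved for the container
        cpu: 100m
        memory: 50Mi
      limits:                                    # maximum resources the container may consume
        cpu: 200m
        memory: 100Mi
As noted in this section, if requests are omitted they default to the explicitly specified limits, and requests can never exceed limits.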
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.220. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.221. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 8.1.222. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. appArmorProfile object appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default value is Default which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. 
Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.223. .spec.initContainers[].securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by this container. If set, this profile overrides the pod's appArmorProfile. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 8.1.224. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 8.1.225. 
.spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.226. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.227. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.228. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. 
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 8.1.229. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 8.1.230. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 8.1.231. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. 
You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 8.1.232. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 8.1.233. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 8.1.234. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 8.1.235. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 8.1.236. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 8.1.237. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 8.1.238. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. 
If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.239. .spec.otlp Description Settings related to the OTLP receiver feature. It requires Prometheus >= v2.55.0. Type object Property Type Description promoteResourceAttributes array (string) List of OpenTelemetry Attributes that should be promoted to metric labels, defaults to none. 8.1.240. .spec.persistentVolumeClaimRetentionPolicy Description The field controls if and how PVCs are deleted during the lifecycle of a StatefulSet. The default behavior is all PVCs are retained. This is an alpha field from kubernetes 1.23 until 1.26 and a beta field from 1.26. It requires enabling the StatefulSetAutoDeletePVC feature gate. Type object Property Type Description whenDeleted string WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of Retain causes PVCs to not be affected by StatefulSet deletion. The Delete policy causes those PVCs to be deleted. whenScaled string WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of Retain causes PVCs to not be affected by a scaledown. The Delete policy causes the associated PVCs for any excess pods above the replica count to be deleted. 8.1.241. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Prometheus pods. The following items are reserved and cannot be overridden: * "prometheus" label, set to the name of the Prometheus object. * "app.kubernetes.io/instance" label, set to the name of the Prometheus object. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "prometheus". * "app.kubernetes.io/version" label, set to the Prometheus version. * "operator.prometheus.io/name" label, set to the name of the Prometheus object. * "operator.prometheus.io/shard" label, set to the shard number of the Prometheus object. * "kubectl.kubernetes.io/default-container" annotation, set to "prometheus". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. 
Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 8.1.242. .spec.podMonitorNamespaceSelector Description Namespaces to match for PodMonitors discovery. An empty label selector matches all namespaces. A null label selector (default value) matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.243. .spec.podMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.244. .spec.podMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.245. .spec.podMonitorSelector Description PodMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.246. .spec.podMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.247. .spec.podMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.248. .spec.probeNamespaceSelector Description Namespaces to match for Probe discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.249. .spec.probeNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.250. .spec.probeNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.251. .spec.probeSelector Description Probes to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.252. .spec.probeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.253. 
.spec.probeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.254. .spec.prometheusRulesExcludedFromEnforce Description Defines the list of PrometheusRule objects to which the namespace label enforcement doesn't apply. This is only relevant when spec.enforcedNamespaceLabel is set to true. Deprecated: use spec.excludedFromEnforcement instead. Type array 8.1.255. .spec.prometheusRulesExcludedFromEnforce[] Description PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. Type object Required ruleName ruleNamespace Property Type Description ruleName string Name of the excluded PrometheusRule object. ruleNamespace string Namespace of the excluded PrometheusRule object. 8.1.256. .spec.query Description QuerySpec defines the configuration of the Prometheus query service. Type object Property Type Description lookbackDelta string The delta difference allowed for retrieving metrics during expression evaluations. maxConcurrency integer Number of concurrent queries that can be run at once. maxSamples integer Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return. timeout string Maximum time a query may take before being aborted. 8.1.257. .spec.remoteRead Description Defines the list of remote read configurations. Type array 8.1.258. .spec.remoteRead[] Description RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as basicAuth , or oauth2 . basicAuth object BasicAuth configuration for the URL. Cannot be set at the same time as authorization , or oauth2 . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File from which to read the bearer token for the URL. Deprecated: this will be removed in a future release. Prefer using authorization . filterExternalLabels boolean Whether to use the external labels as selectors for the remote read endpoint. It requires Prometheus >= v2.34.0. followRedirects boolean Configure whether HTTP requests follow HTTP 3xx redirects. It requires Prometheus >= v2.26.0. headers object (string) Custom HTTP headers to be sent along with each remote read request. Be aware that headers that are set by Prometheus itself can't be overwritten. Only valid in Prometheus versions 2.26.0 and newer. name string The name of the remote read queue, it must be unique if specified.
The name is used in metrics and logging in order to differentiate read configurations. It requires Prometheus >= v2.15.0. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as authorization , or basicAuth . proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. readRecent boolean Whether reads should be made for queries for time ranges that the local storage should have complete data for. remoteTimeout string Timeout for requests to the remote read endpoint. requiredMatchers object (string) An optional list of equality matchers which have to be present in a selector to query the remote read endpoint. tlsConfig object TLS Config to use for the URL. url string The URL of the endpoint to query from. 8.1.259. .spec.remoteRead[].authorization Description Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as basicAuth , or oauth2 . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.260. .spec.remoteRead[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.261. .spec.remoteRead[].basicAuth Description BasicAuth configuration for the URL. Cannot be set at the same time as authorization , or oauth2 . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.262. .spec.remoteRead[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.263. .spec.remoteRead[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.264. .spec.remoteRead[].oauth2 Description OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as authorization , or basicAuth . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 8.1.265. .spec.remoteRead[].oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.266. .spec.remoteRead[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.267. .spec.remoteRead[].oauth2.clientId.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.268. .spec.remoteRead[].oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.269. .spec.remoteRead[].oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 8.1.270. .spec.remoteRead[].oauth2.proxyConnectHeader{} Description Type array 8.1.271. .spec.remoteRead[].oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.272. .spec.remoteRead[].oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.273. .spec.remoteRead[].oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.274. .spec.remoteRead[].oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.275. .spec.remoteRead[].oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.276. .spec.remoteRead[].oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.277. .spec.remoteRead[].oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.278. .spec.remoteRead[].oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.279. .spec.remoteRead[].oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.280. .spec.remoteRead[].proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 8.1.281. .spec.remoteRead[].proxyConnectHeader{} Description Type array 8.1.282. .spec.remoteRead[].proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.283. .spec.remoteRead[].tlsConfig Description TLS Config to use for the URL. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.284. .spec.remoteRead[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.285. .spec.remoteRead[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.286. .spec.remoteRead[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.287. .spec.remoteRead[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.288. .spec.remoteRead[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.289. .spec.remoteRead[].tlsConfig.cert.secret Description Secret containing data to use for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.290. .spec.remoteRead[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.291. .spec.remoteWrite Description Defines the list of remote write configurations. Type array 8.1.292. .spec.remoteWrite[] Description RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as sigv4 , basicAuth , oauth2 , or azureAd . azureAd object AzureAD for the URL. It requires Prometheus >= v2.45.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or sigv4 . basicAuth object BasicAuth configuration for the URL. Cannot be set at the same time as sigv4 , authorization , oauth2 , or azureAd . bearerToken string Warning: this field shouldn't be used because the token value appears in clear-text. Prefer using authorization . Deprecated: this will be removed in a future release. bearerTokenFile string File from which to read bearer token for the URL. Deprecated: this will be removed in a future release. Prefer using authorization . enableHTTP2 boolean Whether to enable HTTP2. followRedirects boolean Configure whether HTTP requests follow HTTP 3xx redirects. It requires Prometheus >= v2.26.0. headers object (string) Custom HTTP headers to be sent along with each remote write request. Be aware that headers that are set by Prometheus itself can't be overwritten. It requires Prometheus >= v2.25.0. messageVersion string The Remote Write message's version to use when writing to the endpoint. Version1.0 corresponds to the prometheus.WriteRequest protobuf message introduced in Remote Write 1.0. Version2.0 corresponds to the io.prometheus.write.v2.Request protobuf message introduced in Remote Write 2.0. When Version2.0 is selected, Prometheus will automatically be configured to append the metadata of scraped metrics to the WAL. Before setting this field, consult with your remote storage provider what message version it supports. It requires Prometheus >= v2.54.0. metadataConfig object MetadataConfig configures the sending of series metadata to the remote storage. name string The name of the remote write queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate queues. It requires Prometheus >= v2.15.0. 
noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. oauth2 object OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as sigv4 , authorization , basicAuth , or azureAd . proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. queueConfig object QueueConfig allows tuning of the remote write queue parameters. remoteTimeout string Timeout for requests to the remote write endpoint. sendExemplars boolean Enables sending of exemplars over remote write. Note that exemplar-storage itself must be enabled using the spec.enableFeatures option for exemplars to be scraped in the first place. It requires Prometheus >= v2.27.0. sendNativeHistograms boolean Enables sending of native histograms, also known as sparse histograms, over remote write. It requires Prometheus >= v2.40.0. sigv4 object Sigv4 allows configuring AWS's Signature Version 4 for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or azureAd . tlsConfig object TLS Config to use for the URL. url string The URL of the endpoint to send samples to. writeRelabelConfigs array The list of remote write relabel configurations. writeRelabelConfigs[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config 8.1.293. .spec.remoteWrite[].authorization Description Authorization section for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as sigv4 , basicAuth , oauth2 , or azureAd . Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. credentialsFile string File to read a secret from, mutually exclusive with credentials . type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 8.1.294. .spec.remoteWrite[].authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.295. .spec.remoteWrite[].azureAd Description AzureAD for the URL. It requires Prometheus >= v2.45.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or sigv4 .
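As an illustrative sketch only (the URL and client ID below are placeholders, not values taken from this reference), a remote-write entry that authenticates with an Azure user-assigned managed identity could look like:
spec:
  remoteWrite:
  - url: https://example.ingest.monitor.azure.com/api/v1/write   # placeholder remote write endpoint
    azureAd:
      cloud: AzurePublic                                         # one of AzurePublic, AzureChina, AzureGovernment
      managedIdentity:
        clientId: 00000000-0000-0000-0000-000000000000           # placeholder client ID
As described above, managedIdentity, oauth, and sdk are mutually exclusive ways of configuring Azure AD authentication, and azureAd cannot be combined with authorization, basicAuth, oauth2, or sigv4.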
Type object Property Type Description cloud string The Azure Cloud. Options are 'AzurePublic', 'AzureChina', or 'AzureGovernment'. managedIdentity object ManagedIdentity defines the Azure User-assigned Managed identity. Cannot be set at the same time as oauth or sdk . oauth object OAuth defines the oauth config that is being used to authenticate. Cannot be set at the same time as managedIdentity or sdk . It requires Prometheus >= v2.48.0. sdk object SDK defines the Azure SDK config that is being used to authenticate. See https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication Cannot be set at the same time as oauth or managedIdentity . It requires Prometheus >= 2.52.0. 8.1.296. .spec.remoteWrite[].azureAd.managedIdentity Description ManagedIdentity defines the Azure User-assigned Managed identity. Cannot be set at the same time as oauth or sdk . Type object Required clientId Property Type Description clientId string The client id 8.1.297. .spec.remoteWrite[].azureAd.oauth Description OAuth defines the oauth config that is being used to authenticate. Cannot be set at the same time as managedIdentity or sdk . It requires Prometheus >= v2.48.0. Type object Required clientId clientSecret tenantId Property Type Description clientId string clientID is the clientId of the Azure Active Directory application that is being used to authenticate. clientSecret object clientSecret specifies a key of a Secret containing the client secret of the Azure Active Directory application that is being used to authenticate. tenantId string tenantId is the tenant ID of the Azure Active Directory application that is being used to authenticate. 8.1.298. .spec.remoteWrite[].azureAd.oauth.clientSecret Description clientSecret specifies a key of a Secret containing the client secret of the Azure Active Directory application that is being used to authenticate. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.299. .spec.remoteWrite[].azureAd.sdk Description SDK defines the Azure SDK config that is being used to authenticate. See https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authentication Cannot be set at the same time as oauth or managedIdentity . It requires Prometheus >= 2.52.0. Type object Property Type Description tenantId string tenantId is the tenant ID of the azure active directory application that is being used to authenticate. 8.1.300. .spec.remoteWrite[].basicAuth Description BasicAuth configuration for the URL. Cannot be set at the same time as sigv4 , authorization , oauth2 , or azureAd . Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 8.1.301. .spec.remoteWrite[].basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.302. .spec.remoteWrite[].basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.303. .spec.remoteWrite[].metadataConfig Description MetadataConfig configures the sending of series metadata to the remote storage. Type object Property Type Description send boolean Defines whether metric metadata is sent to the remote storage or not. sendInterval string Defines how frequently metric metadata is sent to the remote storage. 8.1.304. .spec.remoteWrite[].oauth2 Description OAuth2 configuration for the URL. It requires Prometheus >= v2.27.0. Cannot be set at the same time as sigv4 , authorization , basicAuth , or azureAd . Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. noProxy string noProxy is a comma-separated string that can contain IPs, CIDR notation, domain names that should be excluded from proxying. IP and domain names can contain port numbers. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader object ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyConnectHeader{} array proxyConnectHeader{}[] object SecretKeySelector selects a key of a Secret. proxyFromEnvironment boolean Whether to use the proxy configuration defined by environment variables (HTTP_PROXY, HTTPS_PROXY, and NO_PROXY). It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. proxyUrl string proxyURL defines the HTTP proxy server to use. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tlsConfig object TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. tokenUrl string tokenURL configures the URL to fetch the token from. 8.1.305. .spec.remoteWrite[].oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.306. .spec.remoteWrite[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. 
This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.307. .spec.remoteWrite[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.308. .spec.remoteWrite[].oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.309. .spec.remoteWrite[].oauth2.proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 8.1.310. .spec.remoteWrite[].oauth2.proxyConnectHeader{} Description Type array 8.1.311. .spec.remoteWrite[].oauth2.proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.312. .spec.remoteWrite[].oauth2.tlsConfig Description TLS configuration to use when connecting to the OAuth2 server. It requires Prometheus >= v2.43.0. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.313. .spec.remoteWrite[].oauth2.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.314. 
.spec.remoteWrite[].oauth2.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.315. .spec.remoteWrite[].oauth2.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.316. .spec.remoteWrite[].oauth2.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.317. .spec.remoteWrite[].oauth2.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.318. .spec.remoteWrite[].oauth2.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.319. .spec.remoteWrite[].oauth2.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.320. .spec.remoteWrite[].proxyConnectHeader Description ProxyConnectHeader optionally specifies headers to send to proxies during CONNECT requests. It requires Prometheus >= v2.43.0 or Alertmanager >= 0.25.0. Type object 8.1.321. 
.spec.remoteWrite[].proxyConnectHeader{} Description Type array 8.1.322. .spec.remoteWrite[].proxyConnectHeader{}[] Description SecretKeySelector selects a key of a Secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.323. .spec.remoteWrite[].queueConfig Description QueueConfig allows tuning of the remote write queue parameters. Type object Property Type Description batchSendDeadline string BatchSendDeadline is the maximum time a sample will wait in buffer. capacity integer Capacity is the number of samples to buffer per shard before we start dropping them. maxBackoff string MaxBackoff is the maximum retry delay. maxRetries integer MaxRetries is the maximum number of times to retry a batch on recoverable errors. maxSamplesPerSend integer MaxSamplesPerSend is the maximum number of samples per send. maxShards integer MaxShards is the maximum number of shards, i.e. amount of concurrency. minBackoff string MinBackoff is the initial retry delay. Gets doubled for every retry. minShards integer MinShards is the minimum number of shards, i.e. amount of concurrency. retryOnRateLimit boolean Retry upon receiving a 429 status code from the remote-write storage. This is an experimental feature , it may change in any upcoming release in a breaking way. sampleAgeLimit string SampleAgeLimit drops samples older than the limit. It requires Prometheus >= v2.50.0. 8.1.324. .spec.remoteWrite[].sigv4 Description Sigv4 allows configuring AWS Signature Version 4 signing for the URL. It requires Prometheus >= v2.26.0. Cannot be set at the same time as authorization , basicAuth , oauth2 , or azureAd . Type object Property Type Description accessKey object AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the ARN of the AWS role to assume for authentication. secretKey object SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. 8.1.325. .spec.remoteWrite[].sigv4.accessKey Description AccessKey is the AWS API key. If not specified, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.326. .spec.remoteWrite[].sigv4.secretKey Description SecretKey is the AWS API secret. If not specified, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key.
name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.327. .spec.remoteWrite[].tlsConfig Description TLS Config to use for the URL. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.328. .spec.remoteWrite[].tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.329. .spec.remoteWrite[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.330. .spec.remoteWrite[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.331. .spec.remoteWrite[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.332. .spec.remoteWrite[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.333. .spec.remoteWrite[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.334. .spec.remoteWrite[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.335. .spec.remoteWrite[].writeRelabelConfigs Description The list of remote write relabel configurations. Type array 8.1.336. .spec.remoteWrite[].writeRelabelConfigs[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.337. .spec.resources Description Defines the resources requests and limits of the 'prometheus' container. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.338. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.339. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 8.1.340. .spec.ruleNamespaceSelector Description Namespaces to match for PrometheusRule discovery. An empty label selector matches all namespaces. A null label selector matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.341. .spec.ruleNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.342. .spec.ruleNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.343. .spec.ruleSelector Description PrometheusRule objects to be selected for rule evaluation. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.344. .spec.ruleSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.345. .spec.ruleSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.346. .spec.rules Description Defines the configuration of the Prometheus rules' engine. Type object Property Type Description alert object Defines the parameters of the Prometheus rules' engine. Any update to these parameters triggers a restart of the pods. 8.1.347. .spec.rules.alert Description Defines the parameters of the Prometheus rules' engine. Any update to these parameters triggers a restart of the pods. Type object Property Type Description forGracePeriod string Minimum duration between alert and restored 'for' state. This is maintained only for alerts with a configured 'for' time greater than the grace period. forOutageTolerance string Maximum time to tolerate a Prometheus outage for restoring the 'for' state of alerts. resendDelay string Minimum amount of time to wait before resending an alert to Alertmanager. 8.1.348. .spec.runtime Description RuntimeConfig configures the values for the Prometheus process behavior. Type object Property Type Description goGC integer The Go garbage collection target percentage. Lowering this number may increase the CPU usage. See: https://tip.golang.org/doc/gc-guide#GOGC 8.1.349. .spec.scrapeClasses Description List of scrape classes to expose to scraping objects such as PodMonitors, ServiceMonitors, Probes and ScrapeConfigs. This is an experimental feature , it may change in any upcoming release in a breaking way. Type array 8.1.350. .spec.scrapeClasses[] Description Type object Required name Property Type Description attachMetadata object AttachMetadata configures additional metadata to the discovered targets. When the scrape object defines its own configuration, it takes precedence over the scrape class configuration. default boolean Default indicates that the scrape applies to all scrape objects that don't configure an explicit scrape class name. Only one scrape class can be set as the default. metricRelabelings array MetricRelabelings configures the relabeling rules to apply to all samples before ingestion. The Operator adds the scrape class metric relabelings defined here. Then the Operator adds the target-specific metric relabelings defined in ServiceMonitors, PodMonitors, Probes and ScrapeConfigs. Then the Operator adds the namespace enforcement relabeling rule, specified in '.spec.enforcedNamespaceLabel'.
More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs metricRelabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config name string Name of the scrape class. relabelings array Relabelings configures the relabeling rules to apply to all scrape targets. The Operator automatically adds relabelings for a few standard Kubernetes fields like __meta_kubernetes_namespace and \__meta_kubernetes_service_name . Then the Operator adds the scrape class relabelings defined here. Then the Operator adds the target-specific relabelings defined in the scrape object. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config relabelings[] object RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config tlsConfig object TLSConfig defines the TLS settings to use for the scrape. When the scrape objects define their own CA, certificate and/or key, they take precedence over the corresponding scrape class fields. For now only the caFile , certFile and keyFile fields are supported. 8.1.351. .spec.scrapeClasses[].attachMetadata Description AttachMetadata configures additional metadata to the discovered targets. When the scrape object defines its own configuration, it takes precedence over the scrape class configuration. Type object Property Type Description node boolean When set to true, Prometheus attaches node metadata to the discovered targets. The Prometheus service account must have the list and watch permissions on the Nodes objects. 8.1.352. .spec.scrapeClasses[].metricRelabelings Description MetricRelabelings configures the relabeling rules to apply to all samples before ingestion. The Operator adds the scrape class metric relabelings defined here. Then the Operator adds the target-specific metric relabelings defined in ServiceMonitors, PodMonitors, Probes and ScrapeConfigs. Then the Operator adds namespace enforcement relabeling rule, specified in '.spec.enforcedNamespaceLabel'. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type array 8.1.353. .spec.scrapeClasses[].metricRelabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. 
Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.354. .spec.scrapeClasses[].relabelings Description Relabelings configures the relabeling rules to apply to all scrape targets. The Operator automatically adds relabelings for a few standard Kubernetes fields like __meta_kubernetes_namespace and \__meta_kubernetes_service_name . Then the Operator adds the scrape class relabelings defined here. Then the Operator adds the target-specific relabelings defined in the scrape object. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type array 8.1.355. .spec.scrapeClasses[].relabelings[] Description RelabelConfig allows dynamic rewriting of the label set for targets, alerts, scraped samples and remote write samples. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config Type object Property Type Description action string Action to perform based on the regex matching. Uppercase and Lowercase actions require Prometheus >= v2.36.0. DropEqual and KeepEqual actions require Prometheus >= v2.41.0. Default: "Replace" modulus integer Modulus to take of the hash of the source label values. Only applicable when the action is HashMod . regex string Regular expression against which the extracted value is matched. replacement string Replacement value against which a Replace action is performed if the regular expression matches. Regex capture groups are available. separator string Separator is the string between concatenated SourceLabels. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured Separator and matched against the configured regular expression. targetLabel string Label to which the resulting string is written in a replacement. It is mandatory for Replace , HashMod , Lowercase , Uppercase , KeepEqual and DropEqual actions. Regex capture groups are available. 8.1.356. .spec.scrapeClasses[].tlsConfig Description TLSConfig defines the TLS settings to use for the scrape. When the scrape objects define their own CA, certificate and/or key, they take precedence over the corresponding scrape class fields. For now only the caFile , certFile and keyFile fields are supported. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.357. .spec.scrapeClasses[].tlsConfig.ca Description Certificate authority used when verifying server certificates. 
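To make the scrape class TLS settings described above more concrete, the following is a minimal sketch; the class name and certificate paths are assumptions, and only the file-based fields are shown because, per the description above, only caFile , certFile and keyFile are currently supported for scrape classes:

spec:
  scrapeClasses:
  - name: mtls-default                        # hypothetical scrape class name
    default: true                             # applies to scrape objects that do not name a class
    tlsConfig:
      caFile: /etc/prometheus/tls/ca.crt      # assumed paths inside the Prometheus container
      certFile: /etc/prometheus/tls/client.crt
      keyFile: /etc/prometheus/tls/client.key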
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.358. .spec.scrapeClasses[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.359. .spec.scrapeClasses[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.360. .spec.scrapeClasses[].tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.361. .spec.scrapeClasses[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.362. .spec.scrapeClasses[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.363. .spec.scrapeClasses[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.364. .spec.scrapeConfigNamespaceSelector Description Namespaces to match for ScrapeConfig discovery. 
An empty label selector matches all namespaces. A null label selector matches the current namespace only. Note that the ScrapeConfig custom resource definition is currently at Alpha level. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.365. .spec.scrapeConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.366. .spec.scrapeConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.367. .spec.scrapeConfigSelector Description ScrapeConfigs to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Note that the ScrapeConfig custom resource definition is currently at Alpha level. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.368. .spec.scrapeConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.369. .spec.scrapeConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.370. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description appArmorProfile object appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. 
supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID and fsGroup (if specified). If the SupplementalGroupsPolicy feature is enabled, the supplementalGroupsPolicy field determines whether these are in addition to or instead of any group memberships defined in the container image. If unspecified, no additional groups are added, though group memberships defined in the container image may still be used, depending on the supplementalGroupsPolicy field. Note that this field cannot be set when spec.os.name is windows. supplementalGroupsPolicy string Defines how supplemental groups of the first container processes are calculated. Valid values are "Merge" and "Strict". If not specified, "Merge" is used. (Alpha) Using the field requires the SupplementalGroupsPolicy feature gate to be enabled and the container runtime must implement support for this feature. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 8.1.371. .spec.securityContext.appArmorProfile Description appArmorProfile is the AppArmor options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile loaded on the node that should be used. The profile must be preconfigured on the node to work. Must match the loaded name of the profile. Must be set if and only if type is "Localhost". type string type indicates which kind of AppArmor profile will be applied. Valid options are: Localhost - a profile pre-loaded on the node. RuntimeDefault - the container runtime's default profile. Unconfined - no AppArmor enforcement. 8.1.372. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 8.1.373. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. 
Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 8.1.374. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 8.1.375. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set. Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 8.1.376. .spec.securityContext.windowsOptions Description The Windows-specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 8.1.377. .spec.serviceMonitorNamespaceSelector Description Namespaces to match for ServiceMonitors discovery. An empty label selector matches all namespaces. A null label selector (default value) matches the current namespace only. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.378. .spec.serviceMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.379. .spec.serviceMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
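For example, a selector that discovers ServiceMonitors only from namespaces carrying a hypothetical team label could be sketched as follows; the label key and values are placeholders:

spec:
  serviceMonitorNamespaceSelector:
    matchExpressions:
    - key: team            # hypothetical label key
      operator: In
      values:
      - frontend
      - backend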
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.380. .spec.serviceMonitorSelector Description ServiceMonitors to be selected for target discovery. An empty label selector matches all objects. A null label selector matches no objects. If spec.serviceMonitorSelector , spec.podMonitorSelector , spec.probeSelector and spec.scrapeConfigSelector are null, the Prometheus configuration is unmanaged. The Prometheus operator will ensure that the Prometheus configuration's Secret exists, but it is the responsibility of the user to provide the raw gzipped Prometheus configuration under the prometheus.yaml.gz key. This behavior is deprecated and will be removed in the major version of the custom resource definition. It is recommended to use spec.additionalScrapeConfigs instead. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.381. .spec.serviceMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.382. .spec.serviceMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.383. .spec.storage Description Storage defines the storage used by Prometheus. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.15. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. 
The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 8.1.384. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 8.1.385. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 8.1.386. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster.
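As a sketch of the emptyDir variant described above, the following requests a memory-backed volume with a size limit; the values are placeholders, and whether memory-backed storage is acceptable depends on retention requirements, since the data does not survive Pod restarts.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  storage:
    emptyDir:
      medium: Memory   # "" (node default medium) or Memory
      sizeLimit: 2Gi   # placeholder; counted against the Pod's memory limits when medium is Memory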
This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 8.1.387. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 8.1.388. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 
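A minimal sketch of the generic ephemeral volume form, assuming a StorageClass named fast-ssd exists in the cluster; the size and class are placeholders, and the resulting PVC is created and owned by the Pod as described above.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  storage:
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          storageClassName: fast-ssd   # hypothetical StorageClass
          resources:
            requests:
              storage: 50Gi            # placeholder size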
resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.389. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.390. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. 
This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.391. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.392. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
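To illustrate dataSourceRef, the following sketch pre-populates the claim from a VolumeSnapshot; the snapshot name is hypothetical, and the referenced snapshot must exist and be supported by the CSI driver for the volume to be created.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  storage:
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi                    # placeholder size
          dataSourceRef:
            apiGroup: snapshot.storage.k8s.io
            kind: VolumeSnapshot
            name: prometheus-data-snap         # hypothetical snapshot name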
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.393. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.394. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.395. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 8.1.396. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 8.1.397. 
.spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. 
This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.398. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.399. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. 
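For the more common persistent setup, here is a hedged sketch of spec.storage.volumeClaimTemplate; the StorageClass, size, and label are placeholders.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  storage:
    volumeClaimTemplate:
      metadata:
        labels:
          app.kubernetes.io/part-of: monitoring   # hypothetical label for the generated PVCs
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: standard   # hypothetical StorageClass
        resources:
          requests:
            storage: 100Gi           # placeholder size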
(Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.400. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.401. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.402. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.403. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.404. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. 
A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributeClass applied to this PersistentVolumeClaim This is a beta field and requires enabling VolumeAttributesClass feature (off by default). modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is a beta field and requires enabling VolumeAttributesClass feature (off by default). phase string phase represents the current phase of PersistentVolumeClaim. 8.1.405. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. Type array 8.1.406. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "Resizing" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType defines the condition of PV claim. Valid values are: - "Resizing", "FileSystemResizePending" If RecoverVolumeExpansionFailure feature gate is enabled, then following additional values can be expected: - "ControllerResizeError", "NodeResizeError" If VolumeAttributesClass feature gate is enabled, then following additional values can be expected: - "ModifyVolumeError", "ModifyingVolume" 8.1.407. .spec.storage.volumeClaimTemplate.status.modifyVolumeStatus Description ModifyVolumeStatus represents the status object of ControllerModifyVolume operation. When this is unset, there is no ModifyVolume operation being attempted. This is a beta field and requires enabling VolumeAttributesClass feature (off by default). Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. 
- Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 8.1.408. .spec.thanos Description Defines the configuration of the optional Thanos sidecar. Type object Property Type Description additionalArgs array AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. baseImage string Deprecated: use 'image' instead. blockSize string BlockDuration controls the size of TSDB blocks produced by Prometheus. The default value is 2h to match the upstream Prometheus defaults. WARNING: Changing the block duration can impact the performance and efficiency of the entire Prometheus/Thanos stack due to how it interacts with memory and Thanos compactors. It is recommended to keep this value set to a multiple of 120 times your longest scrape or rule interval. For example, 30s * 120 = 1h. getConfigInterval string How often to retrieve the Prometheus configuration. getConfigTimeout string Maximum time to wait when retrieving the Prometheus configuration. grpcListenLocal boolean When true, the Thanos sidecar listens on the loopback interface instead of the Pod IP's address for the gRPC endpoints. It has no effect if listenLocal is true. grpcServerTlsConfig object Configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the caFile , certFile , and keyFile fields are supported. httpListenLocal boolean When true, the Thanos sidecar listens on the loopback interface instead of the Pod IP's address for the HTTP endpoints. It has no effect if listenLocal is true. image string Container image name for Thanos. If specified, it takes precedence over the spec.thanos.baseImage , spec.thanos.tag and spec.thanos.sha fields. Specifying spec.thanos.version is still necessary to ensure the Prometheus Operator knows which version of Thanos is being configured. If neither spec.thanos.image nor spec.thanos.baseImage are defined, the operator will use the latest upstream version of Thanos available at the time when the operator was released. listenLocal boolean Deprecated: use grpcListenLocal and httpListenLocal instead. logFormat string Log format for the Thanos sidecar. logLevel string Log level for the Thanos sidecar. minTime string Defines the start of time range limit served by the Thanos sidecar's StoreAPI. The field's value should be a constant time in RFC3339 format or a time duration relative to current time, such as -1d or 2h45m. Valid duration units are ms, s, m, h, d, w, y. objectStorageConfig object Defines the Thanos sidecar's configuration to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ objectStorageConfigFile takes precedence over this field. 
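As a sketch of enabling the Thanos sidecar with object storage upload, assuming a Secret named thanos-objstore whose objstore.yml key holds a Thanos object storage configuration; the version and Secret coordinates are hypothetical.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  thanos:
    version: v0.34.1            # hypothetical Thanos version
    objectStorageConfig:
      name: thanos-objstore     # hypothetical Secret name
      key: objstore.yml         # key within the Secret holding the Thanos storage configuration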
objectStorageConfigFile string Defines the Thanos sidecar's configuration file to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ This field takes precedence over objectStorageConfig. readyTimeout string ReadyTimeout is the maximum time that the Thanos sidecar will wait for Prometheus to start. resources object Defines the resources requests and limits of the Thanos sidecar. sha string Deprecated: use 'image' instead. The image digest can be specified as part of the image name. tag string Deprecated: use 'image' instead. The image's tag can be specified as part of the image name. tracingConfig object Defines the tracing configuration for the Thanos sidecar. tracingConfigFile takes precedence over this field. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. tracingConfigFile string Defines the tracing configuration file for the Thanos sidecar. This field takes precedence over tracingConfig. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature, it may change in any upcoming release in a breaking way. version string Version of Thanos being deployed. The operator uses this information to generate the Prometheus StatefulSet + configuration files. If not specified, the operator assumes the latest upstream release of Thanos available at the time when the version of the operator was released. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts for Thanos. VolumeMounts specified will be appended to other VolumeMounts in the 'thanos-sidecar' container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 8.1.409. .spec.thanos.additionalArgs Description AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported by the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument, the reconciliation will fail and an error will be logged. Type array 8.1.410. .spec.thanos.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 8.1.411. .spec.thanos.grpcServerTlsConfig Description Configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the caFile, certFile, and keyFile fields are supported. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0.
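To illustrate additionalArgs, the following hedged fragment reuses the argument names given as examples in the field descriptions above; whether a given flag is valid depends on the Thanos version, and an argument that conflicts with one set by the operator will fail reconciliation as noted.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  thanos:
    additionalArgs:
      # name/value pairs are passed to the Thanos container as-is
      - name: scrape.discovery-reload-interval
        value: 30s
      # name-only argument, value omitted
      - name: storage.tsdb.no-lockfile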
serverName string Used to verify the hostname for the targets. 8.1.412. .spec.thanos.grpcServerTlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.413. .spec.thanos.grpcServerTlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.414. .spec.thanos.grpcServerTlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.415. .spec.thanos.grpcServerTlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.416. .spec.thanos.grpcServerTlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.417. .spec.thanos.grpcServerTlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.418. .spec.thanos.grpcServerTlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.419. .spec.thanos.objectStorageConfig Description Defines the Thanos sidecar's configuration to upload TSDB blocks to object storage. More info: https://thanos.io/tip/thanos/storage.md/ objectStorageConfigFile takes precedence over this field. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.420. .spec.thanos.resources Description Defines the resources requests and limits of the Thanos sidecar. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.421. .spec.thanos.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 8.1.422. .spec.thanos.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. request string Request is the name chosen for a request in the referenced claim. If empty, everything from the claim is made available, otherwise only the result of this request. 8.1.423. .spec.thanos.tracingConfig Description Defines the tracing configuration for the Thanos sidecar. tracingConfigFile takes precedence over this field. More info: https://thanos.io/tip/thanos/tracing.md/ This is an experimental feature , it may change in any upcoming release in a breaking way. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
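A minimal sketch of setting requests and limits for the Thanos sidecar; the quantities are placeholders that should be sized to the workload.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  thanos:
    resources:
      requests:
        cpu: 100m       # placeholder values
        memory: 256Mi
      limits:
        memory: 512Mi   # requests cannot exceed limits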
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.424. .spec.thanos.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts for Thanos. VolumeMounts specified will be appended to other VolumeMounts in the 'thanos-sidecar' container. Type array 8.1.425. .spec.thanos.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.426. .spec.tolerations Description Defines the Pods' tolerations if specified. Type array 8.1.427. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. 
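As an illustration of the toleration fields described above, the following sketch lets the Pods schedule onto nodes carrying a hypothetical dedicated=monitoring:NoSchedule taint; the key, value, and effect are assumptions.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  tolerations:
    - key: dedicated        # hypothetical taint key
      operator: Equal
      value: monitoring
      effect: NoSchedule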
By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 8.1.428. .spec.topologySpreadConstraints Description Defines the pod's topology spread constraints if specified. Type array 8.1.429. .spec.topologySpreadConstraints[] Description Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description additionalLabelSelectors string Defines what Prometheus Operator managed labels should be added to labelSelector on the topologySpreadConstraint. labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. 
For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 8.1.430. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
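To illustrate the topology spread fields, here is a hedged sketch that spreads the Pods across zones while still allowing scheduling when the constraint cannot be met; the Pod label used in labelSelector is an assumption.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example   # hypothetical name
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway   # DoNotSchedule would make the constraint hard
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus   # hypothetical Pod label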
8.1.430. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.431. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.432. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.433. .spec.tracingConfig Description TracingConfig configures tracing in Prometheus. This is an experimental feature, it may change in any upcoming release in a breaking way. Type object Required endpoint Property Type Description clientType string Client used to export the traces. Supported values are http or grpc. compression string Compression key for supported compression types. The only supported value is gzip. endpoint string Endpoint to send the traces to. Should be provided in format <host>:<port>. headers object (string) Key-value pairs to be used as headers associated with gRPC or HTTP requests. insecure boolean If disabled, the client will use a secure connection. samplingFraction integer-or-string Sets the probability a given trace will be sampled. Must be a float from 0 through 1. timeout string Maximum time the exporter will wait for each batch export. tlsConfig object TLS Config to use when sending traces. 8.1.434. .spec.tracingConfig.tlsConfig Description TLS Config to use when sending traces. Type object Property Type Description ca object Certificate authority used when verifying server certificates. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Client certificate to present when doing client-authentication. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. maxVersion string Maximum acceptable TLS version. It requires Prometheus >= v2.41.0. minVersion string Minimum acceptable TLS version. It requires Prometheus >= v2.35.0. serverName string Used to verify the hostname for the targets. 8.1.435. .spec.tracingConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets.
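As a rough sketch of how the tracing fields from the preceding sections fit together, the snippet below exports traces over gRPC to an assumed collector endpoint and verifies the collector's certificate against a CA stored in an assumed ConfigMap; the endpoint, ConfigMap name, and key are placeholders rather than values required by the operator.

```yaml
spec:
  tracingConfig:
    clientType: grpc
    endpoint: "otel-collector.example.svc:4317"  # placeholder <host>:<port> of the collector
    samplingFraction: "0.1"                      # sample roughly 10% of traces
    tlsConfig:
      serverName: otel-collector.example.svc     # placeholder hostname to verify
      ca:
        configMap:
          name: otel-ca                          # placeholder ConfigMap holding the CA certificate
          key: ca.crt
```

Because tracingConfig is experimental, the exact field set may change between releases.
8.1.436.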
.spec.tracingConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.437. .spec.tracingConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.438. .spec.tracingConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.439. .spec.tracingConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.440. .spec.tracingConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.441. .spec.tracingConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.442. .spec.tsdb Description Defines the runtime reloadable configuration of the timeseries database(TSDB). It requires Prometheus >= v2.39.0 or PrometheusAgent >= v2.54.0. 
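As a minimal, hedged illustration of the outOfOrderTimeWindow field described in the table below, the following sketch allows samples that arrive up to 30 minutes late to still be ingested:

```yaml
spec:
  tsdb:
    # Accept samples whose timestamp is within 30m of the TSDB max time.
    outOfOrderTimeWindow: 30m
```

The window is only honored on Prometheus >= v2.39.0 (or PrometheusAgent >= v2.54.0), and because the feature is experimental its behavior may change in a future release.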
Type object Property Type Description outOfOrderTimeWindow string Configures how old an out-of-order/out-of-bounds sample can be with respect to the TSDB max time. An out-of-order/out-of-bounds sample is ingested into the TSDB as long as the timestamp of the sample is >= (TSDB.MaxTime - outOfOrderTimeWindow). This is an experimental feature, it may change in any upcoming release in a breaking way. It requires Prometheus >= v2.39.0 or PrometheusAgent >= v2.54.0. 8.1.443. .spec.volumeMounts Description VolumeMounts allows the configuration of additional VolumeMounts. VolumeMounts will be appended to other VolumeMounts in the 'prometheus' container that are generated as a result of StorageSpec objects. Type array 8.1.444. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. When RecursiveReadOnly is set to IfPossible or to Enabled, MountPropagation must be None or unspecified (which defaults to None). name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. recursiveReadOnly string RecursiveReadOnly specifies whether read-only mounts should be handled recursively. If ReadOnly is false, this field has no meaning and must be unspecified. If ReadOnly is true, and this field is set to Disabled, the mount is not made recursively read-only. If this field is set to IfPossible, the mount is made recursively read-only, if it is supported by the container runtime. If this field is set to Enabled, the mount is made recursively read-only if it is supported by the container runtime, otherwise the pod will not be started and an error will be generated to indicate the reason. If this field is set to IfPossible or Enabled, MountPropagation must be set to None (or be unspecified, which defaults to None). If this field is not specified, it is treated as an equivalent of Disabled. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 8.1.445. .spec.volumes Description Volumes allows the configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 8.1.446. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath image object image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine. 
The volume is resolved at pod startup depending on which PullPolicy value is provided: - Always: the kubelet always attempts to pull the reference. Container creation will fail If the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[ ].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[ ].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 8.1.447. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 8.1.448. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 8.1.449. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 8.1.450. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 8.1.451. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.452. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 8.1.453. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.454. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 8.1.455. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.456. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.457. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 8.1.458. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.459. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 8.1.460. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 8.1.461. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 8.1.462. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.463. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.464. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
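To show how an additional volume and its corresponding mount fit together, here is a minimal sketch using an emptyDir volume; the volume name scratch and the mount path /scratch are arbitrary placeholders.

```yaml
spec:
  volumes:
    - name: scratch            # arbitrary placeholder name
      emptyDir:
        medium: ""             # use the node's default storage medium
        sizeLimit: 1Gi
  volumeMounts:
    - name: scratch            # must match the volume name above
      mountPath: /scratch      # arbitrary placeholder path
```

Both lists are appended to the volumes and mounts that the operator already generates for the 'prometheus' container, so it is safest to pick names that do not clash with the generated ones.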
8.1.465. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 8.1.466. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists.
Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 8.1.467. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 8.1.468. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. 
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 8.1.469. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 8.1.470. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. 
When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 8.1.471. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 8.1.472. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.473. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.474. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.475. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 8.1.476. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 8.1.477. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.478. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 8.1.479. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 8.1.480. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 8.1.481. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 8.1.482. 
.spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume. Defaults to "". More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 8.1.483. .spec.volumes[].image Description image represents an OCI object (a container image or artifact) pulled and mounted on the kubelet's host machine. The volume is resolved at pod startup depending on which PullPolicy value is provided: - Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. The volume gets re-resolved if the pod gets deleted and recreated, which means that new remote content will become available on pod recreation. A failure to resolve or pull the image during pod startup will block containers from starting and may add significant latency. Failures will be retried using normal volume backoff and will be reported on the pod reason and message. The types of objects that may be mounted by this volume are defined by the container runtime implementation on a host machine and at minimum must include all valid types supported by the container image field. The OCI object gets mounted in a single directory (spec.containers[*].volumeMounts.mountPath) by merging the manifest layers in the same way as for container images. The volume will be mounted read-only (ro) and with non-executable files (noexec). Sub path mounts for containers are not supported (spec.containers[*].volumeMounts.subpath). The field spec.securityContext.fsGroupChangePolicy has no effect on this volume type. Type object Property Type Description pullPolicy string Policy for pulling OCI objects. Possible values are: - Always: the kubelet always attempts to pull the reference. Container creation will fail if the pull fails. - Never: the kubelet never pulls the reference and only uses a local image or artifact. Container creation will fail if the reference isn't present. - IfNotPresent: the kubelet pulls if the reference isn't already present on disk. Container creation will fail if the reference isn't present and the pull fails. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. reference string Required: Image or artifact reference to be used. Behaves in the same way as pod.spec.containers[*].image. Pull secrets will be assembled in the same way as for the container image by looking up node credentials, SA image pull secrets, and pod spec image pull secrets. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.
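As a hedged sketch of the image volume type described above, the following mounts an OCI artifact read-only into the 'prometheus' container; the reference, volume name, and mount path are placeholders, and availability of this volume type may depend on the cluster version and its feature gates.

```yaml
spec:
  volumes:
    - name: dashboards-artifact                                    # placeholder volume name
      image:
        reference: "registry.example.com/team/dashboards:1.2.3"    # placeholder OCI reference
        pullPolicy: IfNotPresent
  volumeMounts:
    - name: dashboards-artifact
      mountPath: /etc/dashboards                                   # placeholder mount path
```

Because the OCI object is mounted read-only and non-executable, this pattern is better suited to configuration or data artifacts than to executables.
8.1.484.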
.spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 8.1.485. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.486. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 8.1.487. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 8.1.488. 
.spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 8.1.489. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 8.1.490. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections. Each entry in this list handles one source. sources[] object Projection that may be projected along with other supported volume types. Exactly one of these fields must be set. 8.1.491. .spec.volumes[].projected.sources Description sources is the list of volume projections. Each entry in this list handles one source. Type array 8.1.492. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types. Exactly one of these fields must be set. Type object Property Type Description clusterTrustBundle object ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 8.1.493. .spec.volumes[].projected.sources[].clusterTrustBundle Description ClusterTrustBundle allows a pod to access the .spec.trustBundle field of ClusterTrustBundle objects in an auto-updating file. Alpha, gated by the ClusterTrustBundleProjection feature gate. 
ClusterTrustBundle objects can either be selected by name, or by the combination of signer name and a label selector. Kubelet performs aggressive normalization of the PEM contents written into the pod filesystem. Esoteric PEM features such as inter-block comments and block headers are stripped. Certificates are deduplicated. The ordering of certificates within the file is arbitrary, and Kubelet may change the order over time. Type object Required path Property Type Description labelSelector object Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". name string Select a single ClusterTrustBundle by object name. Mutually-exclusive with signerName and labelSelector. optional boolean If true, don't block pod startup if the referenced ClusterTrustBundle(s) aren't available. If using name, then the named ClusterTrustBundle is allowed not to exist. If using signerName, then the combination of signerName and labelSelector is allowed to match zero ClusterTrustBundles. path string Relative path from the volume root to write the bundle. signerName string Select all ClusterTrustBundles that match this signer name. Mutually-exclusive with name. The contents of all selected ClusterTrustBundles will be unified and deduplicated. 8.1.494. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector Description Select all ClusterTrustBundles that match this label selector. Only has effect if signerName is set. Mutually-exclusive with name. If unset, interpreted as "match nothing". If set but empty, interpreted as "match everything". Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 8.1.495. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 8.1.496. .spec.volumes[].projected.sources[].clusterTrustBundle.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 8.1.497. 
.spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 8.1.498. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.499. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.500. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 8.1.501. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 8.1.502. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. 
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 8.1.503. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name, namespace and uid are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 8.1.504. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 8.1.505. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional field specify whether the Secret or its key must be defined 8.1.506. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.507. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.508. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 8.1.509. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to. Default is no group. readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to. Defaults to serviceaccount user. volume string volume is a string that references an already created Quobyte volume by name. 8.1.510. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 8.1.511. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.512. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 8.1.513. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.514. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 8.1.515. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 8.1.516. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 8.1.517. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 8.1.518. 
.spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 8.1.519. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 8.1.520. .spec.web Description Defines the configuration of the Prometheus web server. Type object Property Type Description httpConfig object Defines HTTP parameters for web server. maxConnections integer Defines the maximum number of simultaneous connections A zero value means that Prometheus doesn't accept any incoming connection. pageTitle string The prometheus web page title. tlsConfig object Defines the TLS parameters for HTTPS. 8.1.521. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 8.1.522. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 8.1.523. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Property Type Description cert object Contains the TLS certificate for the server. certFile string Path to the TLS certificate file in the Prometheus container for the server. Mutually exclusive with cert . 
cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType clientCAFile string Path to the CA certificate file for client certificate authentication to the server. Mutually exclusive with client_ca . client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keyFile string Path to the TLS key file in the Prometheus container for the server. Mutually exclusive with keySecret . keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 8.1.524. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.525. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.526. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.527. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 8.1.528. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. 
Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the ConfigMap or its key must be defined 8.1.529. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.530. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. This field is effectively required, but due to backwards compatibility is allowed to be empty. Instances of this type with an empty value here are almost certainly wrong. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean Specify whether the Secret or its key must be defined 8.1.531. .status Description Most recent observed status of the Prometheus cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Prometheus deployment. conditions array The current state of the Prometheus deployment. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Prometheus deployment (their labels match the selector). selector string The selector used to match the pods targeted by this Prometheus resource. shardStatuses array The list has one entry per shard. Each entry provides a summary of the shard status. shardStatuses[] object shards integer Shards is the most recently observed number of shards. unavailableReplicas integer Total number of unavailable pods targeted by this Prometheus deployment. updatedReplicas integer Total number of non-terminated pods targeted by this Prometheus deployment that have the desired version spec. 8.1.532. .status.conditions Description The current state of the Prometheus deployment. Type array 8.1.533. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. 
observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 8.1.534. .status.shardStatuses Description The list has one entry per shard. Each entry provides a summary of the shard status. Type array 8.1.535. .status.shardStatuses[] Description Type object Required availableReplicas replicas shardID unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this shard. replicas integer Total number of pods targeted by this shard. shardID string Identifier of the shard. unavailableReplicas integer Total number of unavailable pods targeted by this shard. updatedReplicas integer Total number of non-terminated pods targeted by this shard that have the desired spec. 8.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheuses GET : list objects of kind Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses DELETE : delete collection of Prometheus GET : list objects of kind Prometheus POST : create Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} DELETE : delete Prometheus GET : read the specified Prometheus PATCH : partially update the specified Prometheus PUT : replace the specified Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/scale GET : read scale of the specified Prometheus PATCH : partially update scale of the specified Prometheus PUT : replace scale of the specified Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status GET : read status of the specified Prometheus PATCH : partially update status of the specified Prometheus PUT : replace status of the specified Prometheus 8.2.1. /apis/monitoring.coreos.com/v1/prometheuses HTTP method GET Description list objects of kind Prometheus Table 8.1. HTTP responses HTTP code Reponse body 200 - OK PrometheusList schema 401 - Unauthorized Empty 8.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses HTTP method DELETE Description delete collection of Prometheus Table 8.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Prometheus Table 8.3. HTTP responses HTTP code Reponse body 200 - OK PrometheusList schema 401 - Unauthorized Empty HTTP method POST Description create Prometheus Table 8.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.5. Body parameters Parameter Type Description body Prometheus schema Table 8.6. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 202 - Accepted Prometheus schema 401 - Unauthorized Empty 8.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} Table 8.7. Global path parameters Parameter Type Description name string name of the Prometheus HTTP method DELETE Description delete Prometheus Table 8.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 8.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Prometheus Table 8.10. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Prometheus Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.12. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Prometheus Table 8.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.14. Body parameters Parameter Type Description body Prometheus schema Table 8.15. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty 8.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/scale Table 8.16. Global path parameters Parameter Type Description name string name of the Prometheus HTTP method GET Description read scale of the specified Prometheus Table 8.17. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PATCH Description partially update scale of the specified Prometheus Table 8.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.19. HTTP responses HTTP code Reponse body 200 - OK Scale schema 401 - Unauthorized Empty HTTP method PUT Description replace scale of the specified Prometheus Table 8.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.21. Body parameters Parameter Type Description body Scale schema Table 8.22. HTTP responses HTTP code Reponse body 200 - OK Scale schema 201 - Created Scale schema 401 - Unauthorized Empty 8.2.5. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status Table 8.23. Global path parameters Parameter Type Description name string name of the Prometheus HTTP method GET Description read status of the specified Prometheus Table 8.24. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Prometheus Table 8.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.26. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Prometheus Table 8.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.28. Body parameters Parameter Type Description body Prometheus schema Table 8.29. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/prometheus-monitoring-coreos-com-v1 |
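As a quick way to exercise the endpoints listed above, you can issue the GET requests through the OpenShift CLI. This is only an illustration: it assumes you are logged in with sufficient permissions, and the namespace (openshift-monitoring) and object name (k8s) are placeholders for whatever Prometheus resource exists in your cluster.
# List Prometheus objects cluster-wide through the raw API path
oc get --raw /apis/monitoring.coreos.com/v1/prometheuses
# Read the status subresource of a single Prometheus object
oc get --raw /apis/monitoring.coreos.com/v1/namespaces/openshift-monitoring/prometheuses/k8s/status
The same paths can also be called with curl and a bearer token if you prefer to contact the API server directly.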
Chapter 2. Differences from upstream OpenJDK 8 | Chapter 2. Differences from upstream OpenJDK 8 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 8 changes: FIPS support. Red Hat build of OpenJDK 8 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 8 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 8 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation) | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.402/rn-openjdk-diff-from-upstream |
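Because the FIPS and cryptographic policy behavior described above is driven by the RHEL host configuration, you can check what Red Hat build of OpenJDK will inherit by querying the system. The commands below are standard RHEL utilities, shown here only as a hint; they are not part of this release note:
# Show the active system-wide cryptographic policy (for example DEFAULT, LEGACY, or FIPS)
update-crypto-policies --show
# Report whether the host is running in FIPS mode (RHEL 8 and later)
fips-mode-setup --check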
Installing on Azure | Installing on Azure OpenShift Container Platform 4.17 Installing OpenShift Container Platform on Azure Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_azure/index |
Chapter 18. Using webhooks | Chapter 18. Using webhooks A webhook is a way for a web page or web application to provide other applications with information in real time. Webhooks are only triggered after an event occurs. The request usually contains details of the event. An event triggers callbacks, such as sending an e-mail confirming a host has been provisioned. You can use webhooks to define a call to an external API based on Satellite internal event by using a fire-and-forget message exchange pattern. The application sending the request does not wait for the response, or ignores it. Payload of a webhook is created from webhook templates. Webhook templates use the same ERB syntax as Provisioning templates. Available variables: @event_name : Name of an event. @webhook_id : Unique event ID. @payload : Payload data, different for each event type. To access individual fields, use @payload[:key_name] Ruby hash syntax. @payload[:object] : Database object for events triggered by database actions (create, update, delete). Not available for custom events. @payload[:context] : Additional information as hash like request and session UUID, remote IP address, user, organization and location. Because webhooks use HTTP, no new infrastructure needs be added to existing web services. The typical use case for webhooks in Satellite is making a call to a monitoring system when a host is created or deleted. Webhooks are useful where the action you want to perform in the external system can be achieved through its API. Where it is necessary to run additional commands or edit files, the shellhooks plugin for Capsules is available. The shellhooks plugin enables you to define a shell script on the Capsule that can be executed through the API. You can use webhooks successfully without installing the shellhooks plugin. For a list of available events, see Available webhook events . 18.1. Creating a webhook template Webhook templates are used to generate the body of HTTP request to a configured target when a webhook is triggered. Use the following procedure to create a webhook template in the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Webhook > Webhook Templates . Click Clone an existing template or Create Template . Enter a name for the template. Use the editor to make changes to the template payload. A webhook HTTP payload must be created using Satellite template syntax. The webhook template can use a special variable called @object that can represent the main object of the event. @object can be missing in case of certain events. You can determine what data are actually available with the @payload variable. For more information, see Template Writing Reference in Managing hosts and for available template macros and methods, visit /templates_doc on Satellite Server. Optional: Enter the description and audit comment. Assign organizations and locations. Click Submit . Examples When creating a webhook template, you must follow the format of the target application for which the template is intended. For example, an application can expect a "text" field with the webhook message. Refer to the documentation of your target application to find more about how your webhook template format should look like. Running remote execution jobs This webhook template defines a message with the ID and result of a remote execution job. The webhook which uses this template can be subscribed to events such as Actions Remote Execution Run Host Job Succeeded or Actions Remote Execution Run Host Job Failed . 
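A minimal sketch of such a template body is shown below. It assumes the receiving application accepts JSON with a single "message" field, and the @object attributes used here (id and status) are illustrative placeholders rather than a guaranteed part of the event payload; inspect @payload for the event in your environment to confirm which fields are actually available:
{
  "message": "Remote execution job <%= @object.id %> finished with status <%= @object.status %>"
}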
Creating users This webhook template defines a message with the login and email of a created user. The webhook which uses this template should be subscribed to the User Created event. 18.2. Creating a webhook You can customize events, payloads, HTTP authentication, content type, and headers through the Satellite web UI. Use the following procedure to create a webhook in the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Webhook > Webhooks . Click Create new . From the Subscribe to list, select an event. Enter a Name for your webhook. Enter a Target URL . Webhooks make HTTP requests to pre-configured URLs. The target URL can be a dynamic URL. Click Template to select a template. Webhook templates are used to generate the body of the HTTP request that Satellite Server sends to the target when a webhook is triggered. Enter an HTTP method. Optional: If you do not want to activate the webhook when you create it, uncheck the Enabled flag. Click the Credentials tab. Optional: If HTTP authentication is required, enter User and Password . Optional: Uncheck Verify SSL if you do not want to verify the server certificate against the system certificate store or Satellite CA. On the Additional tab, enter the HTTP Content Type . For example, application/json , application/xml or text/plain on the payload you define. The application does not attempt to convert the content to match the specified content type. Optional: Provide HTTP headers as JSON. ERB is also allowed. When configuring webhooks with endpoints with non-standard HTTP or HTTPS ports, an SELinux port must be assigned, see Configuring SELinux to Ensure Access to Satellite on Custom Ports in Installing Satellite Server in a connected network environment . 18.3. Available webhook events The following table contains a list of webhook events that are available from the Satellite web UI. Action events trigger webhooks only on success , so if an action fails, a webhook is not triggered. For more information about the payload, go to Administer > About > Support > Templates DSL . A list of available types is provided in the following table. Some events are marked as custom ; in that case, the payload is not a database object but a Ruby hash (key-value data structure), so the syntax is different. Event name Description Payload Actions Katello Content View Promote Succeeded A content view was successfully promoted. Actions::Katello::ContentView::Promote Actions Katello Content View Publish Succeeded A content view was successfully published. Actions::Katello::ContentView::Publish Actions Remote Execution Run Host Job Succeeded A generic remote execution job succeeded for a host. This event is emitted for all Remote Execution jobs, when complete. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Errata Install Succeeded Install errata using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Install Succeeded Install package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Install Succeeded Install package using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Remove Remove package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Remove Succeeded Remove package using the Katello interface.
Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Service Restart Succeeded Restart Services using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Group Update Succeeded Update package group using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Package Update Succeeded Update package using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Foreman OpenSCAP Run Scans Succeeded Run OpenSCAP scan. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Host Succeeded Runs an Ansible Playbook containing all the roles defined for a host. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Capsule Upgrade Succeeded Upgrade Capsules on given Capsule Servers. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Configure Cloud Connector Succeeded Configure Cloud Connector on given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Insights Plan Succeeded Runs a given maintenance plan from Red Hat Access Insights given an ID. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Run Playbook Succeeded Run an Ansible Playbook against given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Ansible Enable Web Console Succeeded Run an Ansible Playbook to enable the web console on given hosts. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Puppet Run Host Succeeded Perform a single Puppet run. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Katello Module Stream Action Succeeded Perform a module stream action using the Katello interface. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Pre-upgrade Succeeded Upgradeability check for RHEL 7 host. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Remediation Plan Succeeded Run Remediation plan with Leapp. Actions::RemoteExecution::RunHostJob Actions Remote Execution Run Host Job Leapp Upgrade Succeeded Run Leapp upgrade job for RHEL 7 host. Actions::RemoteExecution::RunHostJob Build Entered A host entered the build mode. Custom event: @payload[:id] (host id), @payload[:hostname] (host name). Build Exited A host build mode was canceled, either it was successfully provisioned or the user canceled the build manually. Custom event: @payload[:id] (host id), @payload[:hostname] (host name). Content View Created/Updated/Destroyed Common database operations on a content view. Katello::ContentView Domain Created/Updated/Destroyed Common database operations on a domain. Domain Host Created/Updated/Destroyed Common database operations on a host. Host Hostgroup Created/Updated/Destroyed Common database operations on a hostgroup. Hostgroup Model Created/Updated/Destroyed Common database operations on a model. Model Status Changed Global host status of a host changed. Custom event: @payload[:id] (host id), @payload[:hostname] , @payload[:global_status] (hash) Subnet Created/Updated/Destroyed Common database operations on a subnet. Subnet Template Render Performed A report template was rendered. Template User Created/Updated/Destroyed Common database operations on a user. User 18.4. Shellhooks With webhooks, you can only map one Satellite event to one API call. 
For advanced integrations, where a single shell script can contain multiple commands, you can install a Capsule shellhooks plug-in that exposes executables by using a REST HTTP API. You can then configure a webhook to reach out to a Capsule API to run a predefined shellhook. A shellhook is an executable script that can be written in any language provided that it can be executed. The shellhook can, for example, run commands or edit files. You must place your executable scripts in /var/lib/foreman-proxy/shellhooks with only alphanumeric characters and underscores in their name. You can pass input to a shellhook script through the webhook payload. This input is redirected to the standard input of the shellhook script. You can pass arguments to a shellhook script by using HTTP headers in the format X-Shellhook-Arg-1 to X-Shellhook-Arg-99 . For more information on passing arguments to a shellhook script, see: Section 18.6, "Passing arguments to shellhook script using webhooks" Section 18.7, "Passing arguments to shellhook script using curl" The HTTP method must be POST. An example URL would be: https://capsule.example.com:9090/shellhook/My_Script . Note Unlike the shellhooks directory, the URL must contain /shellhook/ in singular to be valid. You must enable Capsule Authorization for each webhook connected to a shellhook to enable it to authorize a call. Standard output and standard error output are redirected to the Capsule logs as messages with debug or warning levels, respectively. The shellhook HTTPS calls do not return a value. For an example of creating a shellhook script, see Section 18.8, "Creating a shellhook to print arguments" . 18.5. Installing the shellhooks plugin Optionally, you can install and enable the shellhooks plugin on each Capsule used for shellhooks. Procedure Run the following command: 18.6. Passing arguments to shellhook script using webhooks Use this procedure to pass arguments to a shellhook script using webhooks. Procedure When creating a webhook, on the Additional tab, create HTTP headers in the following format: Ensure that the headers have a valid JSON or ERB format. Only pass safe fields like database ID, name, or labels that do not include new lines or quote characters. For more information, see Section 18.2, "Creating a webhook" . Example 18.7. Passing arguments to shellhook script using curl Use this procedure to pass arguments to a shellhook script using curl. Procedure When executing a shellhook script using curl , create HTTP headers in the following format: "X-Shellhook-Arg-1: VALUE " "X-Shellhook-Arg-2: VALUE " Example 18.8. Creating a shellhook to print arguments Create a simple shellhook script that prints Hello World! when you run a remote execution job. Prerequisites You have the webhooks and shellhooks plugins installed. For more information, see: Section 18.5, "Installing the shellhooks plugin" Procedure Modify the /var/lib/foreman-proxy/shellhooks/print_args script to print arguments to standard error output so you can see them in the Capsule logs: #!/bin/sh # # Prints all arguments to stderr # echo "USD@" >&2 In the Satellite web UI, navigate to Administer > Webhook > Webhooks . Click Create new . From the Subscribe to list, select Actions Remote Execution Run Host Job Succeeded . Enter a Name for your webhook. In the Target URL field, enter the URL of your Capsule Server followed by :9090/shellhook/print_args : Note that shellhook in the URL is singular, unlike the shellhooks directory. From the Template list, select Empty Payload .
On the Credentials tab, check Capsule Authorization . On the Additional tab, enter the following text in the Optional HTTP headers field: Click Submit . You have now successfully created a shellhook that prints "Hello World!" to the Capsule logs every time a remote execution job succeeds. Verification Run a remote execution job on any host. You can use time as a command. For more information, see Executing a Remote Job in Managing hosts . Verify that the shellhook script was triggered and printed "Hello World!" to the Capsule Server logs: You should find the following lines at the end of the log: | [
"{ \"text\": \"job invocation <%= @object.job_invocation_id %> finished with result <%= @object.task.result %>\" }",
"{ \"text\": \"user with login <%= @object.login %> and email <%= @object.mail %> created\" }",
"satellite-installer --enable-foreman-proxy-plugin-shellhooks",
"{ \"X-Shellhook-Arg-1\": \" VALUE \", \"X-Shellhook-Arg-2\": \" VALUE \" }",
"{ \"X-Shellhook-Arg-1\": \"<%= @object.content_view_version_id %>\", \"X-Shellhook-Arg-2\": \"<%= @object.content_view_name %>\" }",
"\"X-Shellhook-Arg-1: VALUE \" \"X-Shellhook-Arg-2: VALUE \"",
"curl --data \"\" --header \"Content-Type: text/plain\" --header \"X-Shellhook-Arg-1: Version 1.0\" --header \"X-Shellhook-Arg-2: My content view\" --request POST --show-error --silent https://capsule.example.com:9090/shellhook/My_Script",
"#!/bin/sh # Prints all arguments to stderr # echo \"USD@\" >&2",
"https:// capsule.example.com :9090/shellhook/print_args",
"{ \"X-Shellhook-Arg-1\": \"Hello\", \"X-Shellhook-Arg-2\": \"World!\" }",
"tail /var/log/foreman-proxy/proxy.log",
"[I] Started POST /shellhook/print_args [I] Finished POST /shellhook/print_args with 200 (0.33 ms) [I] [3520] Started task /var/lib/foreman-proxy/shellhooks/print_args\\ Hello\\ World\\! [W] [3520] Hello World!"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/using_webhooks_admin |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/making-open-source-more-inclusive |
Chapter 4. Refining your view of systems in the advisor service | Chapter 4. Refining your view of systems in the advisor service The Systems view shows all of your systems that have the Insights client installed and reporting advisor data. The Systems list can be refined in the following ways. 4.1. Filter by name Search for the host or system name. 4.2. Sorting options Use the sorting arrows above the following columns to order your systems table: Name. Alphabetize by A to Z or Z to A. Number of recommendations. Order by the number of recommendations impacting each system. Last seen. Order by the number of minutes, hours, or days since an archive was last uploaded from the system to the advisor service. 4.3. Filtering systems by tags, SAP workloads, and groups in the advisor service Filter results in the advisor service UI by custom group tags, SAP workloads, and Satellite groups to quickly locate and view the systems you want to focus on. In the advisor service, access tag, workload, and group filters using the Filter results box, located in the upper left corner of the page in the Red Hat Insights for Red Hat Enterprise Linux application. The filter dropdown menu shows all of the tags associated with the account, allowing you to click one or more parameters by which to filter. To filter by tags in the advisor service, complete the following steps: Procedure Navigate to the Operations > Advisor > Systems page and log in if necessary. The Filter results box is in most views in the Red Hat Insights for Red Hat Enterprise Linux application and these procedures work anywhere you access Filter results . Click the arrow on the Filter results box and scroll to see the tags available for systems on this account. Select one or more tags to filter by SAP workloads, Satellite host group, or a custom group. Applied tags are visible next to the Filter results box. View the filtered results throughout the advisor service. To remove the tag, click Clear filters . Additional resources To learn more about system-group tags in Insights for Red Hat Enterprise Linux, see the chapter, System tags and groups . | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service/assembly-adv-assess-refining-system-list
Chapter 1. Introduction to OpenShift Data Foundation | Chapter 1. Introduction to OpenShift Data Foundation Red Hat OpenShift Data Foundation is a highly integrated collection of cloud storage and data services for Red Hat OpenShift Container Platform. It is available as part of the Red Hat OpenShift Container Platform Service Catalog, packaged as an operator to facilitate simple deployment and management. Red Hat OpenShift Data Foundation services are primarily made available to applications by way of storage classes that represent the following components: Block storage devices, catering primarily to database workloads. Prime examples include Red Hat OpenShift Container Platform logging and monitoring, and PostgreSQL. Important Block storage should be used for any workload only when it does not require sharing the data across multiple containers. Shared and distributed file system, catering primarily to software development, messaging, and data aggregation workloads. Examples include Jenkins build sources and artifacts, WordPress uploaded content, Red Hat OpenShift Container Platform registry, and messaging using JBoss AMQ. Multicloud object storage, featuring a lightweight S3 API endpoint that can abstract the storage and retrieval of data from multiple cloud object stores. On-premises object storage, featuring a robust S3 API endpoint that scales to tens of petabytes and billions of objects, primarily targeting data intensive applications. Examples include the storage and access of row, columnar, and semi-structured data with applications like Spark, Presto, Red Hat AMQ Streams (Kafka), and even machine learning frameworks like TensorFlow and PyTorch. Note Running a PostgreSQL workload on a CephFS persistent volume is not supported, and it is recommended to use a RADOS Block Device (RBD) volume. For more information, see the knowledgebase solution ODF Database Workloads Must Not Use CephFS PVs/PVCs . Red Hat OpenShift Data Foundation version 4.x integrates a collection of software projects, including: Ceph, providing block storage, a shared and distributed file system, and on-premises object storage Ceph CSI, to manage provisioning and lifecycle of persistent volumes and claims NooBaa, providing a Multicloud Object Gateway OpenShift Data Foundation, Rook-Ceph, and NooBaa operators to initialize and manage OpenShift Data Foundation services. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/red_hat_openshift_data_foundation_architecture/introduction-to-openshift-data-foundation-4_rhodf
3.2. Grouping API Use Case | 3.2. Grouping API Use Case This feature allows logically related data to be stored on a single node. For example, if the cache contains user information, the information for all users in a single location can be stored on a single node. The benefit of this approach is that when seeking specific (logically related) data, the Distributed Executor task is directed to run only on the relevant node rather than across all nodes in the cluster. Such directed operations result in optimized performance. Example 3.1. Grouping API Example Acme, Inc. is a home appliance company with over one hundred offices worldwide. Some offices house employees from various departments, while certain locations are occupied exclusively by the employees of one or two departments. The Human Resources (HR) department has employees in Bangkok, London, Chicago, Nice and Venice. Acme, Inc. uses Red Hat JBoss Data Grid's Grouping API to ensure that all the employee records for the HR department are moved to a single node (Node AB) in the cache. As a result, when attempting to retrieve a record for an HR employee, the DistributedExecutor only checks node AB and quickly and easily retrieves the required employee records. Storing related entries on a single node, as illustrated, optimizes data access and prevents wasted time and resources by seeking information on a single node (or a small subset of nodes) instead of all the nodes in the cluster. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/grouping_api_use_case
Chapter 9. Log collection and forwarding | Chapter 9. Log collection and forwarding 9.1. About log collection and forwarding The Red Hat OpenShift Logging Operator deploys a collector based on the ClusterLogForwarder resource specification. There are two collector options supported by this Operator: the legacy Fluentd collector, and the Vector collector. Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. 9.1.1. Log collection The log collector is a daemon set that deploys pods to each Red Hat OpenShift Service on AWS node to collect container and node logs. By default, the log collector uses the following sources: System and infrastructure logs generated by journald log messages from the operating system, the container runtime, and Red Hat OpenShift Service on AWS. /var/log/containers/*.log for all container logs. If you configure the log collector to collect audit logs, it collects them from /var/log/audit/audit.log . The log collector collects the logs from these sources and forwards them internally or externally depending on your logging configuration. 9.1.1.1. Log collector types Vector is a log collector offered as an alternative to Fluentd for the logging. You can configure which logging collector type your cluster uses by modifying the ClusterLogging custom resource (CR) collection spec: Example ClusterLogging CR that configures Vector as the collector apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {} # ... 9.1.1.2. Log collection limitations The container runtimes provide minimal information to identify the source of log messages: project, pod name, and container ID. This information is not sufficient to uniquely identify the source of the logs. If a pod with a given name and project is deleted before the log collector begins processing its logs, information from the API server, such as labels and annotations, might not be available. There might not be a way to distinguish the log messages from a similarly named pod and project or trace the logs to their source. This limitation means that log collection and normalization are considered best effort . Important The available container runtimes provide minimal information to identify the source of log messages and do not guarantee unique individual log messages or that these messages can be traced to their source. 9.1.1.3. Log collector features by type Table 9.1. Log Sources Feature Fluentd Vector App container logs [✓] [✓] App-specific routing [✓] [✓] App-specific routing by namespace [✓] [✓] Infra container logs [✓] [✓] Infra journal logs [✓] [✓] Kube API audit logs [✓] [✓] OpenShift API audit logs [✓] [✓] Open Virtual Network (OVN) audit logs [✓] [✓] Table 9.2. Authorization and Authentication Feature Fluentd Vector Elasticsearch certificates [✓] [✓] Elasticsearch username / password [✓] [✓] Amazon Cloudwatch keys [✓] [✓] Amazon Cloudwatch STS [✓] [✓] Kafka certificates [✓] [✓] Kafka username / password [✓] [✓] Kafka SASL [✓] [✓] Loki bearer token [✓] [✓] Table 9.3. 
Normalizations and Transformations Feature Fluentd Vector Viaq data model - app [✓] [✓] Viaq data model - infra [✓] [✓] Viaq data model - infra(journal) [✓] [✓] Viaq data model - Linux audit [✓] [✓] Viaq data model - kube-apiserver audit [✓] [✓] Viaq data model - OpenShift API audit [✓] [✓] Viaq data model - OVN [✓] [✓] Loglevel Normalization [✓] [✓] JSON parsing [✓] [✓] Structured Index [✓] [✓] Multiline error detection [✓] [✓] Multicontainer / split indices [✓] [✓] Flatten labels [✓] [✓] CLF static labels [✓] [✓] Table 9.4. Tuning Feature Fluentd Vector Fluentd readlinelimit [✓] Fluentd buffer [✓] - chunklimitsize [✓] - totallimitsize [✓] - overflowaction [✓] - flushthreadcount [✓] - flushmode [✓] - flushinterval [✓] - retrywait [✓] - retrytype [✓] - retrymaxinterval [✓] - retrytimeout [✓] Table 9.5. Visibility Feature Fluentd Vector Metrics [✓] [✓] Dashboard [✓] [✓] Alerts [✓] [✓] Table 9.6. Miscellaneous Feature Fluentd Vector Global proxy support [✓] [✓] x86 support [✓] [✓] ARM support [✓] [✓] IBM Power(R) support [✓] [✓] IBM Z(R) support [✓] [✓] IPv6 support [✓] [✓] Log event buffering [✓] Disconnected Cluster [✓] [✓] 9.1.1.4. Collector outputs The following collector outputs are supported: Table 9.7. Supported outputs Feature Fluentd Vector Elasticsearch v6-v8 [✓] [✓] Fluent forward [✓] Syslog RFC3164 [✓] [✓] (Logging 5.7+) Syslog RFC5424 [✓] [✓] (Logging 5.7+) Kafka [✓] [✓] Amazon Cloudwatch [✓] [✓] Amazon Cloudwatch STS [✓] [✓] Loki [✓] [✓] HTTP [✓] [✓] (Logging 5.7+) Google Cloud Logging [✓] [✓] Splunk [✓] (Logging 5.6+) 9.1.2. Log forwarding Administrators can create ClusterLogForwarder resources that specify which logs are collected, how they are transformed, and where they are forwarded to. ClusterLogForwarder resources can be used up to forward container, infrastructure, and audit logs to specific endpoints within or outside of a cluster. Transport Layer Security (TLS) is supported so that log forwarders can be configured to send logs securely. Administrators can also authorize RBAC permissions that define which service accounts and users can access and forward which types of logs. 9.1.2.1. Log forwarding implementations There are two log forwarding implementations available: the legacy implementation, and the multi log forwarder feature. Important Only the Vector collector is supported for use with the multi log forwarder feature. The Fluentd collector can only be used with legacy implementations. 9.1.2.1.1. Legacy implementation In legacy implementations, you can only use one log forwarder in your cluster. The ClusterLogForwarder resource in this mode must be named instance , and must be created in the openshift-logging namespace. The ClusterLogForwarder resource also requires a corresponding ClusterLogging resource named instance in the openshift-logging namespace. 9.1.2.1.2. Multi log forwarder feature The multi log forwarder feature is available in logging 5.8 and later, and provides the following functionality: Administrators can control which users are allowed to define log collection and which logs they are allowed to collect. Users who have the required permissions are able to specify additional log collection configurations. Administrators who are migrating from the deprecated Fluentd collector to the Vector collector can deploy a new log forwarder separately from their existing deployment. The existing and new log forwarders can operate simultaneously while workloads are being migrated. 
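As an illustration of the multi log forwarder feature described above, the following sketch shows two ClusterLogForwarder resources running side by side. This is not an example taken from the product documentation: the resource names, namespaces, service accounts, and endpoint URLs are hypothetical, and only the field structure ( serviceAccountName , outputs , pipelines , inputRefs , outputRefs ) follows the examples later in this chapter.
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: team-a-forwarder          # hypothetical name and namespace
  namespace: team-a
spec:
  serviceAccountName: team-a-collector   # service account with log collection permissions
  outputs:
  - name: team-a-loki
    type: loki
    url: https://loki.team-a.example.com:3100   # hypothetical endpoint
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - team-a-loki
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: team-b-forwarder          # a second forwarder can operate at the same time
  namespace: team-b
spec:
  serviceAccountName: team-b-collector
  outputs:
  - name: team-b-kafka
    type: kafka
    url: tls://kafka.team-b.example.com:9093/team-b-topic   # hypothetical broker and topic
  pipelines:
  - inputRefs:
    - application
    outputRefs:
    - team-b-kafka
Each forwarder is managed independently, so one team can migrate or reconfigure its forwarder without affecting the other.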
In multi log forwarder implementations, you are not required to create a corresponding ClusterLogging resource for your ClusterLogForwarder resource. You can create multiple ClusterLogForwarder resources using any name, in any namespace, with the following exceptions: You cannot create a ClusterLogForwarder resource named instance in the openshift-logging namespace, because this is reserved for a log forwarder that supports the legacy workflow using the Fluentd collector. You cannot create a ClusterLogForwarder resource named collector in the openshift-logging namespace, because this is reserved for the collector. 9.1.2.2. Enabling the multi log forwarder feature for a cluster To use the multi log forwarder feature, you must create a service account and cluster role bindings for that service account. You can then reference the service account in the ClusterLogForwarder resource to control access permissions. Important In order to support multi log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces]. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. 9.1.2.2.1. Authorizing log collection RBAC permissions In logging 5.8 and later, the Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. You can authorize RBAC permissions for log collection by binding the required cluster roles to a service account. Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> Additional resources Using RBAC Authorization Kubernetes documentation 9.2. Log output types Outputs define the destination where logs are sent to from a log forwarder. You can configure multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. 9.2.1. Supported log forwarding outputs Outputs can be any of the following types: Table 9.8. Supported log output types Output type Protocol Tested with Logging versions Supported collector type Elasticsearch v6 HTTP 1.1 6.8.1, 6.8.23 5.6+ Fluentd, Vector Elasticsearch v7 HTTP 1.1 7.12.2, 7.17.7, 7.10.1 5.6+ Fluentd, Vector Elasticsearch v8 HTTP 1.1 8.4.3, 8.6.1 5.6+ Fluentd [1] , Vector Fluent Forward Fluentd forward v1 Fluentd 1.14.6, Logstash 7.10.1, Fluentd 1.14.5 5.4+ Fluentd Google Cloud Logging REST over HTTPS Latest 5.7+ Vector HTTP HTTP 1.1 Fluentd 1.14.6, Vector 0.21 5.7+ Fluentd, Vector Kafka Kafka 0.11 Kafka 2.4.1, 2.7.0, 3.3.1 5.4+ Fluentd, Vector Loki REST over HTTP and HTTPS 2.3.0, 2.5.0, 2.7, 2.2.1 5.4+ Fluentd, Vector Splunk HEC 8.2.9, 9.0.0 5.7+ Vector Syslog RFC3164, RFC5424 Rsyslog 8.37.0-9.el7, rsyslog-8.39.0 5.4+ Fluentd, Vector [2] Amazon CloudWatch REST over HTTPS Latest 5.4+ Fluentd, Vector Fluentd does not support Elasticsearch 8 in the logging version 5.6.2. 
Vector supports Syslog in the logging version 5.7 and higher. 9.2.2. Output type descriptions default The on-cluster, Red Hat managed log store. You are not required to configure the default output. Note If you configure a default output, you receive an error message, because the default output name is reserved for referencing the on-cluster, Red Hat managed log store. loki Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. kafka A Kafka broker. The kafka output can use a TCP or TLS connection. elasticsearch An external Elasticsearch instance. The elasticsearch output can use a TLS connection. fluentdForward An external log aggregation solution that supports Fluentd. This option uses the Fluentd forward protocols. The fluentForward output can use a TCP or TLS connection and supports shared-key authentication by providing a shared_key field in a secret. Shared-key authentication can be used with or without TLS. Important The fluentdForward output is only supported if you are using the Fluentd collector. It is not supported if you are using the Vector collector. If you are using the Vector collector, you can forward logs to Fluentd by using the http output. syslog An external log aggregation solution that supports the syslog RFC3164 or RFC5424 protocols. The syslog output can use a UDP, TCP, or TLS connection. cloudwatch Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). cloudlogging Google Cloud Logging, a monitoring and log storage service hosted by Google Cloud Platform (GCP). 9.3. Enabling JSON log forwarding You can configure the Log Forwarding API to parse JSON strings into a structured object. 9.3.1. Parsing JSON logs You can use a ClusterLogForwarder object to parse JSON logs into a structured object and forward them to a supported output. To illustrate how this works, suppose that you have the following structured JSON log entry: Example structured JSON log entry {"level":"info","name":"fred","home":"bedrock"} To enable parsing JSON log, you add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example: Example snippet showing parse: json pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json When you enable parsing JSON logs by using parse: json , the CR copies the JSON-structured log entry in a structured field, as shown in the following example: Example structured output containing the structured JSON log entry {"structured": { "level": "info", "name": "fred", "home": "bedrock" }, "more fields..."} Important If the log entry does not contain valid structured JSON, the structured field is absent. 9.3.2. Configuring JSON log data for Elasticsearch If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) to group each schema into a single output definition. This way, each schema is forwarded to a separate index. Important If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. 
Structure types You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store: structuredTypeKey is the name of a message field. The value of that field is used to construct the index name. kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name. openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name. kubernetes.container_name uses the container name to construct the index name. structuredTypeName : If the structuredTypeKey field is not set or its key is not present, the structuredTypeName value is used as the structured type. When you use both the structuredTypeKey field and the structuredTypeName field together, the structuredTypeName value provides a fallback index name if the key in the structuredTypeKey field is missing from the JSON log data. Note Although you can set the value of structuredTypeKey to any field shown in the "Log Record Fields" topic, the most useful fields are shown in the preceding list of structure types. A structuredTypeKey: kubernetes.labels.<key> example Suppose the following: Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google". The user labels these application pods with logFormat=apache and logFormat=google . You use the following snippet in your ClusterLogForwarder CR YAML file. apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: # ... outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables parsing JSON logs. In that case, the following structured log record goes to the app-apache-write index: And the following structured log record goes to the app-google-write index: A structuredTypeKey: openshift.labels.<key> example Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2 1 Uses the value of the key-value pair that is formed by the OpenShift myLabel label. 2 The myLabel element gives its string value, myValue , to the structured log record. In that case, the following structured log record goes to the app-myValue-write index: Additional considerations The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write". Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices. If there is no non-empty structured type, forward an unstructured record with no structured field. It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats , not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache . 9.3.3. 
Forwarding JSON logs to the Elasticsearch log store For an Elasticsearch log store, if your JSON log entries follow different schemas , configure the ClusterLogForwarder custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema. Important Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Procedure Add the following snippet to your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json Use structuredTypeKey field to specify one of the log record fields. Use structuredTypeName field to specify a name. Important To parse JSON logs, you must set both the structuredTypeKey and structuredTypeName fields. For inputRefs , specify which log types to forward by using that pipeline, such as application, infrastructure , or audit . Add the parse: json element to pipelines. Create the CR object: USD oc create -f <filename>.yaml The Red Hat OpenShift Logging Operator redeploys the collector pods. However, if they do not redeploy, delete the collector pods to force them to redeploy. USD oc delete pod --selector logging-infra=collector 9.3.4. Forwarding JSON logs from containers in the same pod to separate indices You can forward structured logs from different containers within the same pod to different indices. To use this feature, you must configure the pipeline with multi-container support and annotate the pods. Logs are written to indices with a prefix of app- . It is recommended that Elasticsearch be configured with aliases to accommodate this. Important JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats. Prerequisites Logging for Red Hat OpenShift: 5.5 Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables multi-container outputs. Create or edit a YAML file that defines the Pod CR object: apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage 1 Format: containerType.logging.openshift.io/<container-name>: <index> 2 Annotation names must match container names Warning This configuration might significantly increase the number of shards on the cluster. Additional resources Kubernetes Annotations Additional resources About log forwarding 9.4. 
Configuring log forwarding In a logging deployment, container and infrastructure logs are forwarded to the internal log store defined in the ClusterLogging custom resource (CR) by default. Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. If this default configuration meets your needs, you do not need to configure a ClusterLogForwarder CR. If a ClusterLogForwarder CR exists, logs are not forwarded to the internal log store unless a pipeline is defined that contains the default output. 9.4.1. About forwarding logs to third-party systems To send logs to specific endpoints inside and outside your Red Hat OpenShift Service on AWS cluster, you specify a combination of outputs and pipelines in a ClusterLogForwarder custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes Secret object. pipeline Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following: application . Container logs generated by user applications running in the cluster, except infrastructure container applications. infrastructure . Container logs from pods that run in the openshift* , kube* , or default projects and journal logs sourced from node file system. audit . Audit logs generated by the node audit system, auditd , Kubernetes API server, OpenShift API server, and OVN network. You can add labels to outbound log messages by using key:value pairs in the pipeline. For example, you might add a label to messages that are forwarded to other data centers or label the logs by type. Labels that are added to objects are also forwarded with the log message. input Forwards the application logs associated with a specific project to a pipeline. In the pipeline, you define which log types to forward using an inputRef parameter and where to forward the logs to using an outputRef parameter. Secret A key:value map that contains confidential data such as user credentials. Note the following: If you do not define a pipeline for a log type, the logs of the undefined types are dropped. For example, if you specify a pipeline for the application and audit types, but do not specify a pipeline for the infrastructure type, infrastructure logs are dropped. You can use multiple types of outputs in the ClusterLogForwarder custom resource (CR) to send logs to servers that support different protocols. The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the my-apps-logs project to the internal Elasticsearch instance. 
Sample log forwarding outputs and pipelines apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: "elasticsearch" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: "kafka" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: "true" 9 datacenter: "east" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: "west" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: "south" 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Configuration for an secure Elasticsearch output using a secret with a secure URL. A name to describe the output. The type of output: elasticsearch . The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. The secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project. 5 Configuration for an insecure Elasticsearch output: A name to describe the output. The type of output: elasticsearch . The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. 6 Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL: A name to describe the output. The type of output: kafka . Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix. 7 Configuration for an input to filter application logs from the my-project namespace. 8 Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance: A name to describe the pipeline. The inputRefs is the log type, in this example audit . The outputRefs is the name of the output to use, in this example elasticsearch-secure to forward to the secure Elasticsearch instance and default to forward to the internal Elasticsearch instance. Optional: Labels to add to the logs. 9 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 10 Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance. 11 Configuration for a pipeline to send logs from the my-project project to the internal Elasticsearch instance. A name to describe the pipeline. The inputRefs is a specific input: my-app-logs . The outputRefs is default . Optional: String. One or more labels to add to the logs. 12 Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name: The inputRefs is the log type, in this example application . The outputRefs is the name of the output to use. 
Optional: String. One or more labels to add to the logs. Fluentd log handling when the external log aggregator is unavailable If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. Red Hat OpenShift Service on AWS rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods. Supported Authorization Keys Common key types are provided here. Some output types support additional specialized keys, documented with the output-specific configuration field. All secret keys are optional. Enable the security features you want by setting the relevant keys. You are responsible for creating and maintaining any additional configurations that external destinations might require, such as keys and secrets, service accounts, port openings, or global proxy configuration. Open Shift Logging will not attempt to verify a mismatch between authorization combinations. Transport Layer Security (TLS) Using a TLS URL ( http://... or ssl://... ) without a secret enables basic TLS server-side authentication. Additional TLS features are enabled by including a secret and setting the following optional fields: passphrase : (string) Passphrase to decode an encoded TLS private key. Requires tls.key . ca-bundle.crt : (string) File name of a customer CA for server authentication. Username and Password username : (string) Authentication user name. Requires password . password : (string) Authentication password. Requires username . Simple Authentication Security Layer (SASL) sasl.enable (boolean) Explicitly enable or disable SASL. If missing, SASL is automatically enabled when any of the other sasl. keys are set. sasl.mechanisms : (array) List of allowed SASL mechanism names. If missing or empty, the system defaults are used. sasl.allow-insecure : (boolean) Allow mechanisms that send clear-text passwords. Defaults to false. 9.4.1.1. Creating a Secret You can create a secret in the directory that contains your certificate and key files by using the following command: USD oc create secret generic -n <namespace> <secret_name> \ --from-file=ca-bundle.crt=<your_bundle_file> \ --from-literal=username=<your_username> \ --from-literal=password=<your_password> Note Generic or opaque secrets are recommended for best results. 9.4.2. Creating a log forwarder To create a log forwarder, you must create a ClusterLogForwarder CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using the multi log forwarder feature, you must also reference the service account in the ClusterLogForwarder CR. If you are using the multi log forwarder feature on your cluster, you can create ClusterLogForwarder custom resources (CRs) in any namespace, using any name. If you are using a legacy implementation, the ClusterLogForwarder CR must be named instance , and must be created in the openshift-logging namespace. Important You need administrator permissions for the namespace where you create the ClusterLogForwarder CR. 
ClusterLogForwarder resource example apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8 # ... 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 5 7 The type of output that you want to forward logs to. The value of this field can be default , loki , kafka , elasticsearch , fluentdForward , syslog , or cloudwatch . Note The default output type is not supported in mutli log forwarder implementations. 6 A name for the output that you want to forward logs to. 8 The URL of the output that you want to forward logs to. 9.4.3. Tuning log payloads and delivery In logging 5.9 and newer versions, the tuning spec in the ClusterLogForwarder custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs. For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput. Important To use this feature, your logging deployment must be configured to use the Vector collector. The tuning spec in the ClusterLogForwarder CR is not supported when using the Fluentd collector. The following example shows the ClusterLogForwarder CR options that you can modify to tune log forwarder outputs: Example ClusterLogForwarder CR tuning options apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5 # ... 1 Specify the delivery mode for log forwarding. AtLeastOnce delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash. AtMostOnce delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss. 2 Specifying a compression configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are none for no compression, gzip , snappy , zlib , or zstd . lz4 compression is also available if you are using a Kafka output. 
See the table "Supported compression types for tuning outputs" for more information. 3 Specifies a limit for the maximum payload of a single send operation to the output. 4 Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds ( ms ), seconds ( s ), or minutes ( m ). 5 Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds ( ms ), seconds ( s ), or minutes ( m ). Table 9.9. Supported compression types for tuning outputs Compression algorithm Splunk Amazon Cloudwatch Elasticsearch 8 LokiStack Apache Kafka HTTP Syslog Google Cloud Microsoft Azure Monitoring gzip X X X X X snappy X X X X zlib X X X zstd X X X lz4 X 9.4.4. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true . Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true 9.4.4.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. Table 9.10. Supported languages per collector Language Fluentd Vector Java [✓] [✓] JS [✓] [✓] Ruby [✓] [✓] Python [✓] [✓] Golang [✓] [✓] PHP [✓] [✓] Dart [✓] [✓] 9.4.4.2. Troubleshooting When enabled, the collector configuration will include a new section with type: detect_exceptions Example vector configuration section Example fluentd config section 9.4.5. Forwarding logs to Splunk You can forward logs to the Splunk HTTP Event Collector (HEC) in addition to, or instead of, the internal default Red Hat OpenShift Service on AWS log store. Note Using this feature with Fluentd is not supported. Prerequisites Red Hat OpenShift Logging Operator 5.6 or later A ClusterLogging instance with vector specified as the collector Base64 encoded Splunk HEC token Procedure Create a secret using your Base64 encoded Splunk HEC token. USD oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token> Create or edit the ClusterLogForwarder Custom Resource (CR) using the template below: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10 1 In legacy implementations, the CR name must be instance . 
In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the name of the secret that contains your HEC token. 6 Specify the output type as splunk . 7 Specify the URL (including port) of your Splunk HEC. 8 Specify which log types to forward by using the pipeline: application , infrastructure , or audit . 9 Optional: Specify a name for the pipeline. 10 Specify the name of the output to use when forwarding logs with this pipeline. 9.4.6. Forwarding logs over HTTP Forwarding logs over HTTP is supported for both the Fluentd and Vector log collectors. To enable, specify http as the output type in the ClusterLogForwarder custom resource (CR). Procedure Create or edit the ClusterLogForwarder CR using the template below: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Destination address for logs. 5 Additional headers to send with the log record. 6 Secret name for destination credentials. 7 Values are either true or false . 8 This value should be the same as the output name. 9.4.7. Forwarding to Azure Monitor Logs With logging 5.9 and later, you can forward logs to Azure Monitor Logs in addition to, or instead of, the default log store. This functionality is provided by the Vector Azure Monitor Logs sink . Prerequisites You are familiar with how to administer and create a ClusterLogging custom resource (CR) instance. You are familiar with how to administer and create a ClusterLogForwarder CR instance. You understand the ClusterLogForwarder CR specifications. You have basic familiarity with Azure services. You have an Azure account configured for Azure Portal or Azure CLI access. You have obtained your Azure Monitor Logs primary or the secondary security key. You have determined which log types to forward. To enable log forwarding to Azure Monitor Logs via the HTTP Data Collector API: Create a secret with your shared key: apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1 1 Must contain a primary or secondary key for the Log Analytics workspace making the request. To obtain a shared key , you can use this command in Azure CLI: Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName "<resource_name>" -Name "<workspace_name>" Create or edit your ClusterLogForwarder CR using the template matching your log selection. 
Forward all logs apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor 1 Unique identifier for the Log Analytics workspace. Required field. 2 Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters. Forward application and infrastructure logs apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra 1 Azure record type of the data being submitted. May only contain letters, numbers, and underscores (_), and may not exceed 100 characters. Advanced configuration options apiVersion: "logging.openshift.io/v1" kind: "ClusterLogForwarder" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: "/subscriptions/111111111" 1 host: "ods.opinsights.azure.com" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor 1 Resource ID of the Azure resource the data should be associated with. Optional field. 2 Alternative host for dedicated Azure regions. Optional field. Default value is ods.opinsights.azure.com . Default value for Azure Government is ods.opinsights.azure.us . 9.4.8. Forwarding application logs from specific projects You can forward a copy of the application logs from specific projects to an external log aggregator, in addition to, or instead of, using the internal log store. You must also configure the external log aggregator to receive log data from Red Hat OpenShift Service on AWS. To configure forwarding application logs from a project, you must create a ClusterLogForwarder custom resource (CR) with at least one input from a project, optional outputs for other log aggregators, and pipelines that use those inputs and outputs. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: "my-project" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 The name of the output. 4 The output type: elasticsearch , fluentdForward , syslog , or kafka . 5 The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and have tls.crt , tls.key , and ca-bundle.crt keys that each point to the certificates they represent. 7 The configuration for an input to filter application logs from the specified projects. 8 If no namespace is specified, logs are collected from all namespaces. 9 The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named forward-to-fluentd-insecure forwards logs from an input named my-app-logs to an output named fluentd-server-insecure . 10 A list of inputs. 11 The name of the output to use. 12 Optional: String. One or more labels to add to the logs. 13 Configuration for a pipeline to send logs to other log aggregators. Optional: Specify a name for the pipeline. Specify which log types to forward by using the pipeline: application, infrastructure , or audit . Specify the name of the output to use when forwarding logs with this pipeline. Optional: Specify the default output to forward logs to the default log store. Optional: String. One or more labels to add to the logs. 14 Note that application logs from all namespaces are collected when using this configuration. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 9.4.9. Forwarding application logs from specific pods As a cluster administrator, you can use Kubernetes pod labels to gather log data from specific pods and forward it to a log collector. Suppose that you have an application composed of pods running alongside other pods in various namespaces. If those pods have labels that identify the application, you can gather and output their log data to a specific log collector. To specify the pod labels, you use one or more matchLabels key-value pairs. If you specify multiple key-value pairs, the pods must match all of them to be selected. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the pod labels using simple equality-based selectors under inputs[].name.application.selector.matchLabels , as shown in the following example. 
Example ClusterLogForwarder CR YAML file apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name> ... 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 Specify one or more comma-separated values from inputs[].name . 4 Specify one or more comma-separated values from outputs[] . 5 Define a unique inputs[].name for each application that has a unique set of pod labels. 6 Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs. 7 Optional: Specify one or more namespaces. 8 Specify one or more outputs to forward your log data to. Optional: To restrict the gathering of log data to specific namespaces, use inputs[].name.application.namespaces , as shown in the preceding example. Optional: You can send log data from additional applications that have different pod labels to the same pipeline. For each unique combination of pod labels, create an additional inputs[].name section similar to the one shown. Update the selectors to match the pod labels of this application. Add the new inputs[].name value to inputRefs . For example: Create the CR object: USD oc create -f <file-name>.yaml Additional resources For more information on matchLabels in Kubernetes, see Resources that support set-based requirements . 9.4.10. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, checking stops at the first match. How much data is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. Note You can use this feature only if the Vector collector is set up in your logging deployment. In logging 5.8 and later, the ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , watch are dropped. 
Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, a list of HTTP status code for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the Red Hat OpenShift Service on AWS audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards, and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store, and a less detailed stream to a remote site. Note The example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 
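After you define the audit filter and reference it from a pipeline, the policy does not take effect until the ClusterLogForwarder CR is applied to the cluster. A minimal sketch of that step, assuming the CR shown above is saved in a local YAML file (the file name is a placeholder): USD oc apply -f <filename>.yaml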
Additional resources Logging for egress firewall and network policy rules 9.4.11. Forwarding logs to an external Loki logging system You can forward logs to an external Loki logging system in addition to, or instead of, the default log store. To configure log forwarding to Loki, you must create a ClusterLogForwarder custom resource (CR) with an output to Loki, and a pipeline that uses the output. The output to Loki can use the HTTP (insecure) or HTTPS (secure HTTP) connection. Prerequisites You must have a Loki logging system running at the URL you specify with the url field in the CR. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: "loki" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: "loki" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the type as "loki" . 6 Specify the URL and port of the Loki system as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. Loki's default port for HTTP(S) communication is 3100. 7 For a secure connection, you can specify an https or http URL that you authenticate by specifying a secret . 8 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificates it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password." 9 Optional: Specify a metadata key field to generate values for the TenantID field in Loki. For example, setting tenantKey: kubernetes.namespace_name uses the names of the Kubernetes namespaces as values for tenant IDs in Loki. To see which other log record fields you can specify, see the "Log Record Fields" link in the following "Additional resources" section. 10 Optional: Specify a list of metadata field keys to replace the default Loki labels. Loki label names must match the regular expression [a-zA-Z_:][a-zA-Z0-9_:]* . Illegal characters in metadata keys are replaced with _ to form the label name. For example, the kubernetes.labels.foo metadata key becomes Loki label kubernetes_labels_foo . 
If you do not set labelKeys , the default value is: [log_type, kubernetes.namespace_name, kubernetes.pod_name, kubernetes_host] . Keep the set of labels small because Loki limits the size and number of labels allowed. See Configuring Loki, limits_config . You can still query based on any log record field using query filters. 11 Optional: Specify a name for the pipeline. 12 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 13 Specify the name of the output to use when forwarding logs with this pipeline. Note Because Loki requires log streams to be correctly ordered by timestamp, labelKeys always includes the kubernetes_host label set, even if you do not specify it. This inclusion ensures that each stream originates from a single host, which prevents timestamps from becoming disordered due to clock differences on different hosts. Apply the ClusterLogForwarder CR object by running the following command: USD oc apply -f <filename>.yaml Additional resources Configuring Loki server 9.4.12. Forwarding logs to an external Elasticsearch instance You can forward logs to an external Elasticsearch instance in addition to, or instead of, the internal log store. You are responsible for configuring the external log aggregator to receive log data from Red Hat OpenShift Service on AWS. To configure log forwarding to an external Elasticsearch instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. The external Elasticsearch output can use the HTTP (insecure) or HTTPS (secure HTTP) connection. To forward logs to both an external and the internal Elasticsearch instance, create outputs and pipelines to the external instance and a pipeline that uses the default output to forward logs to the internal instance. Note If you only want to forward logs to an internal Elasticsearch instance, you do not need to create a ClusterLogForwarder CR. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: "myValue" 13 # ... 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the elasticsearch type. 6 Specify the Elasticsearch version. This can be 6 , 7 , or 8 . 7 Specify the URL and port of the external Elasticsearch instance as a valid absolute URL. You can use the http (insecure) or https (secure HTTP) protocol. 
If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP Address. 8 For an https prefix, specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. Otherwise, for http and https prefixes, you can specify a secret that contains a username and password. In legacy implementations, the secret must exist in the openshift-logging project. For more information, see the following "Example: Setting a secret that contains a username and password." 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 12 Optional: Specify the default output to send the logs to the internal Elasticsearch instance. 13 Optional: String. One or more labels to add to the logs. Apply the ClusterLogForwarder CR: USD oc apply -f <filename>.yaml Example: Setting a secret that contains a username and password You can use a secret that contains a username and password to authenticate a secure connection to an external Elasticsearch instance. For example, if you cannot use mutual TLS (mTLS) keys because a third party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. Create a Secret YAML file similar to the following example. Use base64-encoded values for the username and password fields. The secret type is opaque by default. apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password> # ... Create the secret: USD oc create secret -n openshift-logging openshift-test-secret.yaml Specify the name of the secret in the ClusterLogForwarder CR: kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: "elasticsearch" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret # ... Note In the value of the url field, the prefix can be http or https . Apply the CR object: USD oc apply -f <filename>.yaml 9.4.13. Forwarding logs using the Fluentd forward protocol You can use the Fluentd forward protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator to receive the logs from Red Hat OpenShift Service on AWS. To configure log forwarding using the forward protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the Fluentd servers, and pipelines that use those outputs. The Fluentd output can use a TCP (insecure) or TLS (secure TCP) connection. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: "C1234" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: "C1234" 1 The name of the ClusterLogForwarder CR must be instance . 2 The namespace for the ClusterLogForwarder CR must be openshift-logging . 3 Specify a name for the output. 4 Specify the fluentdForward type. 5 Specify the URL and port of the external Fluentd instance as a valid absolute URL. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 6 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the openshift-logging project and must contain a ca-bundle.crt key that points to the certificate it represents. 7 Optional: Specify a name for the pipeline. 8 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 9 Specify the name of the output to use when forwarding logs with this pipeline. 10 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 11 Optional: String. One or more labels to add to the logs. 12 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <file-name>.yaml 9.4.13.1. Enabling nanosecond precision for Logstash to ingest data from fluentd For Logstash to ingest log data from fluentd, you must enable nanosecond precision in the Logstash configuration file. Procedure In the Logstash configuration file, set nanosecond_precision to true . Example Logstash configuration file input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } } 9.4.14. Forwarding logs using the syslog protocol You can use the syslog RFC3164 or RFC5424 protocol to send a copy of your logs to an external log aggregator that is configured to accept the protocol instead of, or in addition to, the default Elasticsearch log store. You are responsible for configuring the external log aggregator, such as a syslog server, to receive the logs from Red Hat OpenShift Service on AWS. To configure log forwarding using the syslog protocol, you must create a ClusterLogForwarder custom resource (CR) with one or more outputs to the syslog servers, and pipelines that use those outputs. The syslog output can use a UDP, TCP, or TLS connection. Prerequisites You must have a logging server that is configured to receive the logging data using the specified protocol or format. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: "true" 13 syslog: "east" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: "west" 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the syslog type. 6 Optional: Specify the syslog parameters, listed below. 7 Specify the URL and port of the external syslog instance. You can use the udp (insecure), tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 8 If using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 12 Optional: Specify the default output to forward logs to the internal Elasticsearch instance. 13 Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. 14 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Create the CR object: USD oc create -f <filename>.yaml 9.4.14.1. Adding log source information to message output You can add namespace_name , pod_name , and container_name elements to the message field of the record by adding the AddLogSource field to your ClusterLogForwarder custom resource (CR). spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout Note This configuration is compatible with both RFC3164 and RFC5424. 
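The preceding snippet shows only the relevant output and pipeline settings. To use it, include the addLogSource field in a complete ClusterLogForwarder CR and apply the CR; the file name below is a placeholder: USD oc apply -f <filename>.yaml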
Example syslog message output without AddLogSource <15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {"msgcontent"=>"Message Contents", "timestamp"=>"2020-11-15 17:06:09", "tag_key"=>"rec_tag", "index"=>56} Example syslog message output with AddLogSource <15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={"msgcontent":"My life is my message", "timestamp":"2020-11-16 10:49:36", "tag_key":"rec_tag", "index":76} 9.4.14.2. Syslog parameters You can configure the following for the syslog outputs. For more information, see the syslog RFC3164 or RFC5424 RFC. facility: The syslog facility . The value can be a decimal integer or a case-insensitive keyword: 0 or kern for kernel messages 1 or user for user-level messages, the default. 2 or mail for the mail system 3 or daemon for system daemons 4 or auth for security/authentication messages 5 or syslog for messages generated internally by syslogd 6 or lpr for the line printer subsystem 7 or news for the network news subsystem 8 or uucp for the UUCP subsystem 9 or cron for the clock daemon 10 or authpriv for security authentication messages 11 or ftp for the FTP daemon 12 or ntp for the NTP subsystem 13 or security for the syslog audit log 14 or console for the syslog alert log 15 or solaris-cron for the scheduling daemon 16 - 23 or local0 - local7 for locally used facilities Optional: payloadKey : The record field to use as payload for the syslog message. Note Configuring the payloadKey parameter prevents other parameters from being forwarded to the syslog. rfc: The RFC to be used for sending logs using syslog. The default is RFC5424. severity: The syslog severity to set on outgoing syslog records. The value can be a decimal integer or a case-insensitive keyword: 0 or Emergency for messages indicating the system is unusable 1 or Alert for messages indicating action must be taken immediately 2 or Critical for messages indicating critical conditions 3 or Error for messages indicating error conditions 4 or Warning for messages indicating warning conditions 5 or Notice for messages indicating normal but significant conditions 6 or Informational for messages indicating informational messages 7 or Debug for messages indicating debug-level messages, the default tag: Tag specifies a record field to use as a tag on the syslog message. trimPrefix: Remove the specified prefix from the tag. 9.4.14.3. Additional RFC5424 syslog parameters The following parameters apply to RFC5424: appName: The APP-NAME is a free-text string that identifies the application that sent the log. Must be specified for RFC5424 . msgID: The MSGID is a free-text string that identifies the type of message. Must be specified for RFC5424 . procID: The PROCID is a free-text string. A change in the value indicates a discontinuity in syslog reporting. Must be specified for RFC5424 . 9.4.15. Forwarding logs to a Kafka broker You can forward logs to an external Kafka broker in addition to, or instead of, the default log store. To configure log forwarding to an external Kafka instance, you must create a ClusterLogForwarder custom resource (CR) with an output to that instance, and a pipeline that uses the output. You can include a specific Kafka topic in the output or use the default. The Kafka output can use a TCP (insecure) or TLS (secure TCP) connection. 
Procedure Create or edit a YAML file that defines the ClusterLogForwarder CR object: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: "application" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: "infra" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: "audit" 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the kafka type. 6 Specify the URL and port of the Kafka broker as a valid absolute URL, optionally with a specific topic. You can use the tcp (insecure) or tls (secure TCP) protocol. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address. 7 If you are using a tls prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must contain a ca-bundle.crt key that points to the certificate it represents. In legacy implementations, the secret must exist in the openshift-logging project. 8 Optional: To send an insecure output, use a tcp prefix in front of the URL. Also omit the secret key and its name from this output. 9 Optional: Specify a name for the pipeline. 10 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 11 Specify the name of the output to use when forwarding logs with this pipeline. 12 Optional: String. One or more labels to add to the logs. 13 Optional: Configure multiple outputs to forward logs to other external log aggregators of any supported type: A name to describe the pipeline. The inputRefs is the log type to forward by using the pipeline: application, infrastructure , or audit . The outputRefs is the name of the output to use. Optional: String. One or more labels to add to the logs. Optional: To forward a single output to multiple Kafka brokers, specify an array of Kafka brokers as shown in the following example: # ... spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3 # ... 1 Specify a kafka key that has a brokers and topic key. 2 Use the brokers key to specify an array of one or more brokers. 3 Use the topic key to specify the target topic that receives the logs. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 9.4.16. 
Forwarding logs to Amazon CloudWatch You can forward logs to Amazon CloudWatch, a monitoring and log storage service hosted by Amazon Web Services (AWS). You can forward logs to CloudWatch in addition to, or instead of, the default log store. To configure log forwarding to CloudWatch, you must create a ClusterLogForwarder custom resource (CR) with an output for CloudWatch, and a pipeline that uses the output. Procedure Create a Secret YAML file that uses the aws_access_key_id and aws_secret_access_key fields to specify your base64-encoded AWS credentials. For example: apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= Create the secret. For example: USD oc apply -f cw-secret.yaml Create or edit a YAML file that defines the ClusterLogForwarder CR object. In the file, specify the name of the secret. For example: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the cloudwatch type. 6 Optional: Specify how to group the logs: logType creates log groups for each log type. namespaceName creates a log group for each application name space. It also creates separate log groups for infrastructure and audit logs. namespaceUUID creates a new log groups for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 7 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 8 Specify the AWS region. 9 Specify the name of the secret that contains your AWS credentials. 10 Optional: Specify a name for the pipeline. 11 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 12 Specify the name of the output to use when forwarding logs with this pipeline. Create the CR object: USD oc create -f <file-name>.yaml Example: Using ClusterLogForwarder with Amazon CloudWatch Here, you see an example ClusterLogForwarder custom resource (CR) and the log data that it outputs to Amazon CloudWatch. Suppose that you are running a ROSA cluster named mycluster . The following command returns the cluster's infrastructureName , which you will use to compose aws commands later on: USD oc get Infrastructure/cluster -ojson | jq .status.infrastructureName "mycluster-7977k" To generate log data for this example, you run a busybox pod in a namespace called app . 
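If the app namespace does not already exist on the cluster, one way to create it and switch your current project to it is shown below; the namespace name app is the one used throughout this example, and this step assumes you have permission to create projects: USD oc new-project app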
The busybox pod writes a message to stdout every three seconds: USD oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done' USD oc logs -f busybox My life is my message My life is my message My life is my message ... You can look up the UUID of the app namespace where the busybox pod runs: USD oc get ns/app -ojson | jq .metadata.uid "794e1e1a-b9f5-4958-a190-e76a9b53d7bf" In your ClusterLogForwarder custom resource (CR), you configure the infrastructure , audit , and application log types as inputs to the all-logs pipeline. You also connect this pipeline to cw output, which forwards the logs to a CloudWatch instance in the us-east-2 region: apiVersion: "logging.openshift.io/v1" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw Each region in CloudWatch contains three levels of objects: log group log stream log event With groupBy: logType in the ClusterLogForwarding CR, the three log types in the inputRefs produce three log groups in Amazon Cloudwatch: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.application" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" Each of the log groups contains log streams: USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName "kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log" USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log" "ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log" ... USD aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log" "ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log" ... Each log stream contains log events. 
To see a log event from the busybox Pod, you specify its log stream from the application log group: USD aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { "events": [ { "timestamp": 1629422704178, "message": "{\"docker\":{\"container_id\":\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\"},\"kubernetes\":{\"container_name\":\"busybox\",\"namespace_name\":\"app\",\"pod_name\":\"busybox\",\"container_image\":\"docker.io/library/busybox:latest\",\"container_image_id\":\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\",\"pod_id\":\"870be234-90a3-4258-b73f-4f4d6e2777c7\",\"host\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"labels\":{\"run\":\"busybox\"},\"master_url\":\"https://kubernetes.default.svc\",\"namespace_id\":\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\",\"namespace_labels\":{\"kubernetes_io/metadata_name\":\"app\"}},\"message\":\"My life is my message\",\"level\":\"unknown\",\"hostname\":\"ip-10-0-216-3.us-east-2.compute.internal\",\"pipeline_metadata\":{\"collector\":{\"ipaddr4\":\"10.0.216.3\",\"inputname\":\"fluent-plugin-systemd\",\"name\":\"fluentd\",\"received_at\":\"2021-08-20T01:25:08.085760+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-20T01:25:04.178986+00:00\",\"viaq_index_name\":\"app-write\",\"viaq_msg_id\":\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\",\"log_type\":\"application\",\"time\":\"2021-08-20T01:25:04+00:00\"}", "ingestionTime": 1629422744016 }, ... Example: Customizing the prefix in log group names In the log group names, you can replace the default infrastructureName prefix, mycluster-7977k , with an arbitrary string like demo-group-prefix . To make this change, you update the groupPrefix field in the ClusterLogForwarding CR: cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2 The value of groupPrefix replaces the default infrastructureName prefix: USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "demo-group-prefix.application" "demo-group-prefix.audit" "demo-group-prefix.infrastructure" Example: Naming log groups after application namespace names For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the name of the application namespace. If you delete an application namespace object and create a new one that has the same name, CloudWatch continues using the same log group as before. If you consider successive application namespace objects that have the same name as equivalent to each other, use the approach described in this example. Otherwise, if you need to distinguish the resulting log groups from each other, see the following "Naming log groups for application namespace UUIDs" section instead. To create application log groups whose names are based on the names of the application namespaces, you set the value of the groupBy field to namespaceName in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceName region: us-east-2 Setting groupBy to namespaceName affects the application log group only. It does not affect the audit and infrastructure log groups. In Amazon Cloudwatch, the namespace name appears at the end of each log group name. 
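To put the namespace-based grouping into effect, reapply the updated ClusterLogForwarder CR; optionally, you can then remove the application log group that was created under the previous logType scheme, because CloudWatch does not delete it automatically. The file name is a placeholder and the group name is taken from this example: USD oc apply -f <filename>.yaml USD aws logs delete-log-group --log-group-name mycluster-7977k.application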
Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.app log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.app" "mycluster-7977k.audit" "mycluster-7977k.infrastructure" If the cluster in this example had contained multiple application namespaces, the output would show multiple log groups, one for each namespace. The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. Example: Naming log groups after application namespace UUIDs For each application namespace in your cluster, you can create a log group in CloudWatch whose name is based on the UUID of the application namespace. If you delete an application namespace object and create a new one, CloudWatch creates a new log group. If you consider successive application namespace objects with the same name as different from each other, use the approach described in this example. Otherwise, see the preceding "Example: Naming log groups for application namespace names" section instead. To name log groups after application namespace UUIDs, you set the value of the groupBy field to namespaceUUID in the ClusterLogForwarder CR: cloudwatch: groupBy: namespaceUUID region: us-east-2 In Amazon Cloudwatch, the namespace UUID appears at the end of each log group name. Because there is a single application namespace, "app", the following output shows a new mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf log group instead of mycluster-7977k.application : USD aws --output json logs describe-log-groups | jq .logGroups[].logGroupName "mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf" // uid of the "app" namespace "mycluster-7977k.audit" "mycluster-7977k.infrastructure" The groupBy field affects the application log group only. It does not affect the audit and infrastructure log groups. 9.4.17. Creating a secret for AWS CloudWatch with an existing AWS role If you have an existing role for AWS, you can create a secret for AWS with STS using the oc create secret --from-literal command. Procedure In the CLI, enter the following to generate a secret for AWS: USD oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions Example Secret apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions 9.4.18. Forwarding logs to Amazon CloudWatch from STS enabled clusters For clusters with AWS Security Token Service (STS) enabled, you must create the AWS IAM roles and policies that enable log forwarding, and a ClusterLogForwarder custom resource (CR) with an output for CloudWatch. 
Prerequisites Logging for Red Hat OpenShift: 5.5 and later Procedure Prepare the AWS account: Create an IAM policy JSON file with the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:DescribeLogGroups", "logs:DescribeLogStreams", "logs:PutLogEvents", "logs:PutRetentionPolicy" ], "Resource": "arn:aws:logs:*:*:*" } ] } Create an IAM trust JSON file with the following content: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::<your_aws_account_id>:oidc-provider/<openshift_oidc_provider>" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "<openshift_oidc_provider>:sub": "system:serviceaccount:openshift-logging:logcollector" 2 } } } ] } 1 Specify your AWS account ID and the OpenShift OIDC provider endpoint. Obtain the endpoint by running the following command: USD rosa describe cluster \ -c USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}') \ -o yaml | awk '/oidc_endpoint_url/ {print USD2}' | cut -d '/' -f 3,4 2 Specify the OpenShift OIDC endpoint again. Create the IAM role: USD aws iam create-role --role-name "<your_rosa_cluster_name>-RosaCloudWatch" \ --assume-role-policy-document file://<your_trust_file_name>.json \ --query Role.Arn \ --output text Save the output. You will use it in the steps. Create the IAM policy: USD aws iam create-policy \ --policy-name "RosaCloudWatch" \ --policy-document file:///<your_policy_file_name>.json \ --query Policy.Arn \ --output text Save the output. You will use it in the steps. Attach the IAM policy to the IAM role: USD aws iam attach-role-policy \ --role-name "<your_rosa_cluster_name>-RosaCloudWatch" \ --policy-arn <policy_ARN> 1 1 Replace policy_ARN with the output you saved while creating the policy. Create a Secret YAML file for the Red Hat OpenShift Logging Operator: apiVersion: v1 kind: Secret metadata: name: cloudwatch-credentials namespace: openshift-logging stringData: credentials: |- [default] sts_regional_endpoints = regional role_arn: <role_ARN> 1 web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token 1 Replace role_ARN with the output you saved while creating the role. Create the secret: USD oc apply -f cloudwatch-credentials.yaml Create or edit a ClusterLogForwarder custom resource: apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12 1 In legacy implementations, the CR name must be instance . In multi log forwarder implementations, you can use any name. 2 In legacy implementations, the CR namespace must be openshift-logging . In multi log forwarder implementations, you can use any namespace. 3 The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the openshift-logging namespace. 4 Specify a name for the output. 5 Specify the cloudwatch type. 6 Optional: Specify how to group the logs: logType creates log groups for each log type namespaceName creates a log group for each application name space. 
Infrastructure and audit logs are unaffected, remaining grouped by logType . namespaceUUID creates a new log groups for each application namespace UUID. It also creates separate log groups for infrastructure and audit logs. 7 Optional: Specify a string to replace the default infrastructureName prefix in the names of the log groups. 8 Specify the AWS region. 9 Specify the name of the secret you created previously. 10 Optional: Specify a name for the pipeline. 11 Specify which log types to forward by using the pipeline: application, infrastructure , or audit . 12 Specify the name of the output to use when forwarding logs with this pipeline. Additional resources AWS STS API Reference 9.5. Configuring the logging collector Logging for Red Hat OpenShift collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata. All supported modifications to the log collector can be performed though the spec.collection stanza in the ClusterLogging custom resource (CR). 9.5.1. Configuring the log collector You can configure which log collector type your logging uses by modifying the ClusterLogging custom resource (CR). Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogging CR. Procedure Modify the ClusterLogging CR collection spec: ClusterLogging CR example apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: # ... spec: # ... collection: type: <log_collector_type> 1 resources: {} tolerations: {} # ... 1 The log collector type you want to use for the logging. This can be vector or fluentd . Apply the ClusterLogging CR by running the following command: USD oc apply -f <filename>.yaml 9.5.2. Creating a LogFileMetricExporter resource In logging version 5.8 and newer versions, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the Red Hat OpenShift Service on AWS web console dashboard for Produced Logs . Prerequisites You have administrator permissions. You have installed the Red Hat OpenShift Logging Operator. You have installed the OpenShift CLI ( oc ). Procedure Create a LogFileMetricExporter CR as a YAML file: Example LogFileMetricExporter CR apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3 # ... 1 Optional: The nodeSelector stanza defines which nodes the pods are scheduled on. 2 The resources stanza defines resource requirements for the LogFileMetricExporter CR. 3 Optional: The tolerations stanza defines the tolerations that the pods accept. Apply the LogFileMetricExporter CR by running the following command: USD oc apply -f <filename>.yaml Verification A logfilesmetricexporter pod runs concurrently with a collector pod on each node. 
Verify that the logfilesmetricexporter pods are running in the namespace where you have created the LogFileMetricExporter CR, by running the following command and observing the output: USD oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging Example output NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s 9.5.3. Configure log collector CPU and memory limits The log collector allows for adjustments to both the CPU and memory limits. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc -n openshift-logging edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi # ... 1 Specify the CPU and memory limits and requests as needed. The values shown are the default values. 9.5.4. Configuring input receivers The Red Hat OpenShift Logging Operator deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. The service name is generated based on the following: For multi log forwarder ClusterLogForwarder CR deployments, the service name is in the format <ClusterLogForwarder_CR_name>-<input_name> . For example, example-http-receiver . For legacy ClusterLogForwarder CR deployments, meaning those named instance and located in the openshift-logging namespace, the service name is in the format collector-<input_name> . For example, collector-http-receiver . 9.5.4.1. Configuring the collector to receive audit logs as an HTTP server You can configure your log collector to listen for HTTP connections and receive audit logs as an HTTP server by specifying http as a receiver input in the ClusterLogForwarder custom resource (CR). This enables you to use a common log store for audit logs that are collected from both inside and outside of your Red Hat OpenShift Service on AWS cluster. Prerequisites You have administrator permissions. You have installed the OpenShift CLI ( oc ). You have installed the Red Hat OpenShift Logging Operator. You have created a ClusterLogForwarder CR. Procedure Modify the ClusterLogForwarder CR to add configuration for the http receiver input: Example ClusterLogForwarder CR if you are using a multi log forwarder deployment apiVersion: logging.openshift.io/v1beta1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver # ... 1 Specify a name for your input receiver. 2 Specify the input receiver type as http . 3 Currently, only the kube-apiserver webhook format is supported for http input receivers. 4 Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535 . The default value is 8443 if this is not specified. 5 Configure a pipeline for your input receiver. Example ClusterLogForwarder CR if you are using a legacy deployment apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline # ... 
1 Specify a name for your input receiver. 2 Specify the input receiver type as http . 3 Currently, only the kube-apiserver webhook format is supported for http input receivers. 4 Optional: Specify the port that the input receiver listens on. This must be a value between 1024 and 65535 . The default value is 8443 if this is not specified. 5 Configure a pipeline for your input receiver. Apply the changes to the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional resources Overview of API audit filter 9.5.5. Advanced configuration for the Fluentd log forwarder Note Fluentd is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to Fluentd, you can use Vector instead. Logging includes multiple Fluentd parameters that you can use for tuning the performance of the Fluentd log forwarder. With these parameters, you can change the following Fluentd behaviors: Chunk and chunk buffer sizes Chunk flushing behavior Chunk forwarding retry behavior Fluentd collects log data in a single blob called a chunk . When Fluentd creates a chunk, the chunk is considered to be in the stage , where the chunk gets filled with data. When the chunk is full, Fluentd moves the chunk to the queue , where chunks are held before being flushed, or written out to their destination. Fluentd can fail to flush a chunk for a number of reasons, such as network issues or capacity issues at the destination. If a chunk cannot be flushed, Fluentd retries flushing as configured. By default in Red Hat OpenShift Service on AWS, Fluentd uses the exponential backoff method to retry flushing, where Fluentd doubles the time it waits between attempts to retry flushing again, which helps reduce connection requests to the destination. You can disable exponential backoff and use the periodic retry method instead, which retries flushing the chunks at a specified interval. These parameters can help you determine the trade-offs between latency and throughput. To optimize Fluentd for throughput, you could use these parameters to reduce network packet count by configuring larger buffers and queues, delaying flushes, and setting longer times between retries. Be aware that larger buffers require more space on the node file system. To optimize for low latency, you could use the parameters to send data as soon as possible, avoid the build-up of batches, have shorter queues and buffers, and use more frequent flush and retries. You can configure the chunking and flushing behavior using the following parameters in the ClusterLogging custom resource (CR). The parameters are then automatically added to the Fluentd config map for use by Fluentd. Note These parameters are: Not relevant to most users. The default settings should give good general performance. Only for advanced users with detailed knowledge of Fluentd configuration and performance. Only for performance tuning. They have no effect on functional aspects of logging. Table 9.11. Advanced Fluentd Configuration Parameters Parameter Description Default chunkLimitSize The maximum size of each chunk. Fluentd stops writing data to a chunk when it reaches this size. Then, Fluentd sends the chunk to the queue and opens a new chunk. 8m totalLimitSize The maximum size of the buffer, which is the total size of the stage and the queue. 
If the buffer size exceeds this value, Fluentd stops adding data to chunks and fails with an error. All data not in chunks is lost. Approximately 15% of the node disk distributed across all outputs. flushInterval The interval between chunk flushes. You can use s (seconds), m (minutes), h (hours), or d (days). 1s flushMode The method to perform flushes: lazy : Flush chunks based on the timekey parameter. You cannot modify the timekey parameter. interval : Flush chunks based on the flushInterval parameter. immediate : Flush chunks immediately after data is added to a chunk. interval flushThreadCount The number of threads that perform chunk flushing. Increasing the number of threads improves the flush throughput, which hides network latency. 2 overflowAction The chunking behavior when the queue is full: throw_exception : Raise an exception to show in the log. block : Stop data chunking until the full buffer issue is resolved. drop_oldest_chunk : Drop the oldest chunk to accept new incoming chunks. Older chunks have less value than newer chunks. block retryMaxInterval The maximum time in seconds for the exponential_backoff retry method. 300s retryType The retry method when flushing fails: exponential_backoff : Increase the time between flush retries. Fluentd doubles the time it waits until the retry until the retry_max_interval parameter is reached. periodic : Retries flushes periodically, based on the retryWait parameter. exponential_backoff retryTimeOut The maximum time interval to attempt retries before the record is discarded. 60m retryWait The time in seconds before the chunk flush. 1s For more information on the Fluentd chunk lifecycle, see Buffer Plugins in the Fluentd documentation. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance Add or modify any of the following parameters: apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: "300s" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9 # ... 1 Specify the maximum size of each chunk before it is queued for flushing. 2 Specify the interval between chunk flushes. 3 Specify the method to perform chunk flushes: lazy , interval , or immediate . 4 Specify the number of threads to use for chunk flushes. 5 Specify the chunking behavior when the queue is full: throw_exception , block , or drop_oldest_chunk . 6 Specify the maximum interval in seconds for the exponential_backoff chunk flushing method. 7 Specify the retry type when chunk flushing fails: exponential_backoff or periodic . 8 Specify the time in seconds before the chunk flush. 9 Specify the maximum size of the chunk buffer. Verify that the Fluentd pods are redeployed: USD oc get pods -l component=collector -n openshift-logging Check that the new values are in the fluentd config map: USD oc extract configmap/collector-config --confirm Example fluentd.conf <buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size "#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}" total_limit_size "#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer> 9.6. 
Collecting and storing Kubernetes events The Red Hat OpenShift Service on AWS Event Router is a pod that watches Kubernetes events and logs them for collection by the logging. You must manually deploy the Event Router. The Event Router collects events from all projects and writes them to STDOUT . The collector then forwards those events to the store defined in the ClusterLogForwarder custom resource (CR). Important The Event Router adds additional load to Fluentd and can impact the number of other log messages that can be processed. 9.6.1. Deploying and configuring the Event Router Use the following steps to deploy the Event Router into your cluster. You should always deploy the Event Router to the openshift-logging project to ensure it collects events from across the cluster. Note The Event Router image is not a part of the Red Hat OpenShift Logging Operator and must be downloaded separately. The following Template object creates the service account, cluster role, and cluster role binding required for the Event Router. The template also configures and deploys the Event Router pod. You can either use this template without making changes or edit the template to change the deployment object CPU and memory requests. Prerequisites You need proper permissions to create service accounts and update cluster role bindings. For example, you can run the following template with a user that has the cluster-admin role. The Red Hat OpenShift Logging Operator must be installed. Procedure Create a template for the Event Router: apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: "A pod forwarding kubernetes events to OpenShift Logging stack." tags: "events,EFK,logging,cluster-logging" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [""] resources: ["events"] verbs: ["get", "watch", "list"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { "sink": "stdout" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" spec: selector: matchLabels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" replicas: 1 template: metadata: labels: component: "eventrouter" logging-infra: "eventrouter" provider: "openshift" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: "registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4" - name: CPU 7 displayName: CPU value: "100m" - name: MEMORY 8 displayName: Memory value: "128Mi" - name: NAMESPACE displayName: Namespace value: "openshift-logging" 9 1 Creates 
a Service Account in the openshift-logging project for the Event Router. 2 Creates a ClusterRole to monitor for events in the cluster. 3 Creates a ClusterRoleBinding to bind the ClusterRole to the service account. 4 Creates a config map in the openshift-logging project to generate the required config.json file. 5 Creates a deployment in the openshift-logging project to generate and configure the Event Router pod. 6 Specifies the image, identified by a tag such as v0.4 . 7 Specifies the minimum amount of CPU to allocate to the Event Router pod. Defaults to 100m . 8 Specifies the minimum amount of memory to allocate to the Event Router pod. Defaults to 128Mi . 9 Specifies the openshift-logging project to install objects in. Use the following command to process and apply the template: USD oc process -f <templatefile> | oc apply -n openshift-logging -f - For example: USD oc process -f eventrouter.yaml | oc apply -n openshift-logging -f - Example output serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created Validate that the Event Router installed in the openshift-logging project: View the new Event Router pod: USD oc get pods --selector component=eventrouter -o name -n openshift-logging Example output pod/cluster-logging-eventrouter-d649f97c8-qvv8r View the events collected by the Event Router: USD oc logs <cluster_logging_eventrouter_pod> -n openshift-logging For example: USD oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging Example output {"verb":"ADDED","event":{"metadata":{"name":"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","namespace":"openshift-service-catalog-removed","selfLink":"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f","uid":"787d7b26-3d2f-4017-b0b0-420db4ae62c0","resourceVersion":"21399","creationTimestamp":"2020-09-08T15:40:26Z"},"involvedObject":{"kind":"Job","namespace":"openshift-service-catalog-removed","name":"openshift-service-catalog-controller-manager-remover","uid":"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f","apiVersion":"batch/v1","resourceVersion":"21280"},"reason":"Completed","message":"Job completed","source":{"component":"job-controller"},"firstTimestamp":"2020-09-08T15:40:26Z","lastTimestamp":"2020-09-08T15:40:26Z","count":1,"type":"Normal"}} You can also use Kibana to view events by creating an index pattern using the Elasticsearch infra index. | [
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: logs: type: vector vector: {}",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: - application outputRefs: - default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <filename>.yaml",
"oc delete pod --selector logging-infra=collector",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat enableStructuredContainerLogs: true 2 pipelines: - inputRefs: - application name: application-logs outputRefs: - default parse: json",
"apiVersion: v1 kind: Pod metadata: annotations: containerType.logging.openshift.io/heavy: heavy 1 containerType.logging.openshift.io/low: low spec: containers: - name: heavy 2 image: heavyimage - name: low image: lowimage",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-secure 4 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 5 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 6 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 8 inputRefs: - audit outputRefs: - elasticsearch-secure - default labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n <namespace> <secret_name> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 pipelines: - inputRefs: - <log_type> 4 outputRefs: - <output_name> 5 outputs: - name: <output_name> 6 type: <output_type> 7 url: <log_output_url> 8",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: tuning: delivery: AtLeastOnce 1 compression: none 2 maxWrite: <integer> 3 minRetryDuration: 1s 4 maxRetryDuration: 1s 5",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-app-logs inputRefs: - application outputRefs: - default detectMultilineErrors: true",
"[transforms.detect_exceptions_app-logs] type = \"detect_exceptions\" inputs = [\"application\"] languages = [\"All\"] group_by = [\"kubernetes.namespace_name\",\"kubernetes.pod_name\",\"kubernetes.container_name\"] expire_after_ms = 2000 multiline_flush_interval_ms = 1000",
"<label @MULTILINE_APP_LOGS> <match kubernetes.**> @type detect_exceptions remove_tag_prefix 'kubernetes' message message force_line_breaks true multiline_flush_interval .2 </match> </label>",
"oc -n openshift-logging create secret generic vector-splunk-secret --from-literal hecToken=<HEC_Token>",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: splunk-receiver 4 secret: name: vector-splunk-secret 5 type: splunk 6 url: <http://your.splunk.hec.url:8088> 7 pipelines: 8 - inputRefs: - application - infrastructure name: 9 outputRefs: - splunk-receiver 10",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: httpout-app type: http url: 4 http: headers: 5 h1: v1 h2: v2 method: POST secret: name: 6 tls: insecureSkipVerify: 7 pipelines: - name: inputRefs: - application outputRefs: - httpout-app 8",
"apiVersion: v1 kind: Secret metadata: name: my-secret namespace: openshift-logging type: Opaque data: shared_key: <your_shared_key> 1",
"Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName \"<resource_name>\" -Name \"<workspace_name>\"",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id 1 logType: my_log_type 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor-app type: azureMonitor azureMonitor: customerId: my-customer-id logType: application_log 1 secret: name: my-secret - name: azure-monitor-infra type: azureMonitor azureMonitor: customerId: my-customer-id logType: infra_log # secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor-app - name: infra-pipeline inputRefs: - infrastructure outputRefs: - azure-monitor-infra",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogForwarder\" metadata: name: instance namespace: openshift-logging spec: outputs: - name: azure-monitor type: azureMonitor azureMonitor: customerId: my-customer-id logType: my_log_type azureResourceId: \"/subscriptions/111111111\" 1 host: \"ods.opinsights.azure.com\" 2 secret: name: my-secret pipelines: - name: app-pipeline inputRefs: - application outputRefs: - azure-monitor",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project 8 pipelines: - name: forward-to-fluentd-insecure 9 inputRefs: 10 - my-app-logs outputRefs: 11 - fluentd-server-insecure labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application 14 - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 inputs: 5 - name: myAppLogData application: selector: matchLabels: 6 environment: production app: nginx namespaces: 7 - app1 - app2 outputs: 8 - <output_name>",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 outputRefs: default filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: loki-insecure 4 type: \"loki\" 5 url: http://loki.insecure.com:3100 6 loki: tenantKey: kubernetes.namespace_name labelKeys: - kubernetes.labels.foo - name: loki-secure 7 type: \"loki\" url: https://loki.secure.com:3100 secret: name: loki-secret 8 loki: tenantKey: kubernetes.namespace_name 9 labelKeys: - kubernetes.labels.foo 10 pipelines: - name: application-logs 11 inputRefs: 12 - application - audit outputRefs: 13 - loki-secure",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: elasticsearch-example 4 type: elasticsearch 5 elasticsearch: version: 8 6 url: http://elasticsearch.example.com:9200 7 secret: name: es-secret 8 pipelines: - name: application-logs 9 inputRefs: 10 - application - audit outputRefs: - elasticsearch-example 11 - default 12 labels: myLabel: \"myValue\" 13",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: <username> password: <password>",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 labels: clusterId: \"C1234\" 11 - name: forward-to-fluentd-insecure 12 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: rsyslog-east 4 type: syslog 5 syslog: 6 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 7 secret: 8 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'tcp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 9 inputRefs: 10 - audit - application outputRefs: 11 - rsyslog-east - default 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <filename>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: app-logs 4 type: kafka 5 url: tls://kafka.example.devlab.com:9093/app-topic 6 secret: name: kafka-secret 7 - name: infra-logs type: kafka url: tcp://kafka.devlab2.example.com:9093/infra-topic 8 - name: audit-logs type: kafka url: tls://kafka.qelab.example.com:9093/audit-topic secret: name: kafka-secret-qe pipelines: - name: app-topic 9 inputRefs: 10 - application outputRefs: 11 - app-logs labels: logType: \"application\" 12 - name: infra-topic 13 inputRefs: - infrastructure outputRefs: - infra-logs labels: logType: \"infra\" - name: audit-topic inputRefs: - audit outputRefs: - audit-logs labels: logType: \"audit\"",
"spec: outputs: - name: app-logs type: kafka secret: name: kafka-secret-dev kafka: 1 brokers: 2 - tls://kafka-broker1.example.com:9093/ - tls://kafka-broker2.example.com:9093/ topic: app-topic 3",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: cw-secret 9 pipelines: - name: infra-logs 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"oc create secret generic cw-sts-secret -n openshift-logging --from-literal=role_arn=arn:aws:iam::123456789012:role/my-role_with-permissions",
"apiVersion: v1 kind: Secret metadata: namespace: openshift-logging name: my-secret-name stringData: role_arn: arn:aws:iam::123456789012:role/my-role_with-permissions",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"logs:CreateLogGroup\", \"logs:CreateLogStream\", \"logs:DescribeLogGroups\", \"logs:DescribeLogStreams\", \"logs:PutLogEvents\", \"logs:PutRetentionPolicy\" ], \"Resource\": \"arn:aws:logs:*:*:*\" } ] }",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::<your_aws_account_id>:oidc-provider/<openshift_oidc_provider>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<openshift_oidc_provider>:sub\": \"system:serviceaccount:openshift-logging:logcollector\" 2 } } } ] }",
"rosa describe cluster -c USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}') -o yaml | awk '/oidc_endpoint_url/ {print USD2}' | cut -d '/' -f 3,4",
"aws iam create-role --role-name \"<your_rosa_cluster_name>-RosaCloudWatch\" --assume-role-policy-document file://<your_trust_file_name>.json --query Role.Arn --output text",
"aws iam create-policy --policy-name \"RosaCloudWatch\" --policy-document file:///<your_policy_file_name>.json --query Policy.Arn --output text",
"aws iam attach-role-policy --role-name \"<your_rosa_cluster_name>-RosaCloudWatch\" --policy-arn <policy_ARN> 1",
"apiVersion: v1 kind: Secret metadata: name: cloudwatch-credentials namespace: openshift-logging stringData: credentials: |- [default] sts_regional_endpoints = regional role_arn: <role_ARN> 1 web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"oc apply -f cloudwatch-credentials.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> 1 namespace: <log_forwarder_namespace> 2 spec: serviceAccountName: <service_account_name> 3 outputs: - name: cw 4 type: cloudwatch 5 cloudwatch: groupBy: logType 6 groupPrefix: <group prefix> 7 region: us-east-2 8 secret: name: <your_secret_name> 9 pipelines: - name: to-cloudwatch 10 inputRefs: 11 - infrastructure - audit - application outputRefs: - cw 12",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: spec: collection: type: <log_collector_type> 1 resources: {} tolerations: {}",
"oc apply -f <filename>.yaml",
"apiVersion: logging.openshift.io/v1alpha1 kind: LogFileMetricExporter metadata: name: instance namespace: openshift-logging spec: nodeSelector: {} 1 resources: 2 limits: cpu: 500m memory: 256Mi requests: cpu: 200m memory: 128Mi tolerations: [] 3",
"oc apply -f <filename>.yaml",
"oc get pods -l app.kubernetes.io/component=logfilesmetricexporter -n openshift-logging",
"NAME READY STATUS RESTARTS AGE logfilesmetricexporter-9qbjj 1/1 Running 0 2m46s logfilesmetricexporter-cbc4v 1/1 Running 0 2m46s",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: type: fluentd resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"apiVersion: logging.openshift.io/v1beta1 kind: ClusterLogForwarder metadata: spec: serviceAccountName: <service_account_name> inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - name: http-pipeline inputRefs: - http-receiver",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: inputs: - name: http-receiver 1 receiver: type: http 2 http: format: kubeAPIAudit 3 port: 8443 4 pipelines: 5 - inputRefs: - http-receiver name: http-pipeline",
"oc apply -f <filename>.yaml",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: collection: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/collector-config --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size \"#{ENV['TOTAL_LIMIT_SIZE_PER_BUFFER'] || '8589934592'}\" chunk_limit_size 8m overflow_action throw_exception disable_chunk_backup true </buffer>",
"apiVersion: template.openshift.io/v1 kind: Template metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel9:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.rbac.authorization.k8s.io/event-reader created clusterrolebinding.rbac.authorization.k8s.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/logging/log-collection-and-forwarding |
Chapter 21. Solid-State Disk Deployment Guidelines | Chapter 21. Solid-State Disk Deployment Guidelines Solid-state disks (SSD) are storage devices that use NAND flash chips to persistently store data. This sets them apart from previous generations of disks, which store data in rotating, magnetic platters. In an SSD, the access time for data across the full Logical Block Address (LBA) range is constant; whereas with older disks that use rotating media, access patterns that span large address ranges incur seek costs. As such, SSD devices have better latency and throughput. Performance degrades as the number of used blocks approaches the disk capacity. The degree of performance impact varies greatly by vendor. However, all devices experience some degradation. To address the degradation issue, the host system (for example, the Linux kernel) may use discard requests to inform the storage that a given range of blocks is no longer in use. An SSD can use this information to free up space internally, using the free blocks for wear-leveling. Discards will only be issued if the storage advertises support in terms of its storage protocol (be it ATA or SCSI). Discard requests are issued to the storage using the negotiated discard command specific to the storage protocol ( TRIM command for ATA, and WRITE SAME with UNMAP set, or UNMAP command for SCSI). Enabling discard support is most useful when the following points are true: Free space is still available on the file system. Most logical blocks on the underlying storage device have already been written to. For more information about UNMAP , see section 4.7.3.4 of the SCSI Block Commands 3 T10 Specification . Note Not all solid-state devices in the market have discard support. To determine if your solid-state device has discard support, check for /sys/block/sda/queue/discard_granularity , which is the size of the internal allocation unit of the device. Deployment Considerations Because of the internal layout and operation of SSDs, it is best to partition devices on an internal erase block boundary . Partitioning utilities in Red Hat Enterprise Linux 7 choose sane defaults if the SSD exports topology information. However, if the device does not export topology information, Red Hat recommends that the first partition should be created at a 1MB boundary. SSDs have various types of TRIM mechanism, depending on the vendor's choice. Early implementations improved performance at the risk of possible data leakage on a subsequent read command. Following are the types of TRIM mechanism: Non-deterministic TRIM Deterministic TRIM (DRAT) Deterministic Read Zero after TRIM (RZAT) The first two types of TRIM mechanism can cause data leakage, because a read command to an LBA after a TRIM may still return the previously stored data. RZAT returns zeroes for a read command after a TRIM, and Red Hat recommends this TRIM mechanism to avoid data leakage. This issue affects only SSDs. Choose a disk that supports the RZAT mechanism. The type of TRIM mechanism used depends on the hardware implementation. To find the type of TRIM mechanism on ATA, use the hdparm command. See the following example to find the type of TRIM mechanism: For more information, see man hdparm . The Logical Volume Manager (LVM), the device-mapper (DM) targets, and MD (software RAID) targets that LVM uses support discards. The only DM targets that do not support discards are dm-snapshot, dm-crypt, and dm-raid45. Discard support for dm-mirror was added in Red Hat Enterprise Linux 6.1, and as of 7.0, MD supports discards.
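In addition to checking sysfs, you can list the discard parameters that the kernel reports for a device and for any LVM or device-mapper volumes stacked on top of it; nonzero DISC-GRAN and DISC-MAX values indicate that discard requests can be passed down. The following is an illustrative example only, and the device name is a placeholder: lsblk --discard /dev/sda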
Using RAID level 5 over SSD results in low performance if SSDs do not handle discard correctly. You can set discard in the raid456.conf file, or in the GRUB2 configuration. For instructions, see the following procedures. Procedure 21.1. Setting discard in raid456.conf The devices_handle_discard_safely module parameter is set in the raid456 module. To enable discard in the raid456.conf file: Verify that your hardware supports discards: If the returned value is 1 , discards are supported. If the command returns 0 , the RAID code has to zero the disk out, which takes more time. Create the /etc/modprobe.d/raid456.conf file, and include the following line: Use the dracut -f command to rebuild the initial ramdisk ( initrd ). Reboot the system for the changes to take effect. Procedure 21.2. Setting discard in the GRUB2 Configuration The devices_handle_discard_safely module parameter is set in the raid456 module. To enable discard in the GRUB2 configuration: Verify that your hardware supports discards: If the returned value is 1 , discards are supported. If the command returns 0 , the RAID code has to zero the disk out, which takes more time. Add the following line to the /etc/default/grub file: The location of the GRUB2 configuration file is different on systems with the BIOS firmware and on systems with UEFI. Use one of the following commands to recreate the GRUB2 configuration file. On a system with the BIOS firmware, use: On a system with the UEFI firmware, use: Reboot the system for the changes to take effect. Note In Red Hat Enterprise Linux 7, discard is fully supported by the ext4 and XFS file systems only. In Red Hat Enterprise Linux 6.3 and earlier, only the ext4 file system fully supports discard. Starting with Red Hat Enterprise Linux 6.4, both ext4 and XFS file systems fully support discard. To enable discard commands on a device, use the discard option of the mount command. For example, to mount /dev/sda2 to /mnt with discard enabled, use: By default, ext4 does not issue the discard command to, primarily, avoid problems on devices which might not properly implement discard. The Linux swap code issues discard commands to discard-enabled devices, and there is no option to control this behavior. Performance Tuning Considerations For information on performance tuning considerations regarding solid-state disks, see the Solid-State Disks section in the Red Hat Enterprise Linux 7 Performance Tuning Guide . | [
"hdparm -I /dev/sda | grep TRIM Data Set Management TRIM supported (limit 8 block) Deterministic read data after TRIM",
"cat /sys/block/ disk-name /queue/discard_zeroes_data",
"options raid456 devices_handle_discard_safely=Y",
"cat /sys/block/ disk-name /queue/discard_zeroes_data",
"raid456.devices_handle_discard_safely=Y",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg",
"mount -t ext4 -o discard /dev/sda2 /mnt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-ssd |
8.245. xorg-x11-xinit | 8.245. xorg-x11-xinit 8.245.1. RHBA-2013:1538 - xorg-x11-xinit bug fix update Updated xorg-x11-xinit packages that fix one bug are now available for Red Hat Enterprise Linux 6. X.Org is an open source implementation of the X Window System providing basic low-level functionality that full-fledged desktop environments such as GNOME and KDE are built on top of. The xorg-x11-xinit packages contain the X.Org X Window System xinit startup scripts. Bug Fix BZ# 811289 Previously, the startx script did not handle the xserverrc file properly. If the xserverrc file existed in the /etc/X11/xinit/ directory, the script failed with the following error message: Fatal server error: Unrecognized option: /etc/X11/xinit/xserverrc With this update, the X session is started using options from the xserverrc file, and the startx script now properly handles the xserverrc file. Users of xorg-x11-xinit are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/xorg-x11-xinit |
Managing overcloud observability | Managing overcloud observability Red Hat OpenStack Platform 17.1 Tracking physical and virtual resources, and collecting metrics OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_overcloud_observability/index |
Chapter 9. Error handling | Chapter 9. Error handling Errors in AMQ JavaScript can be handled by intercepting named events corresponding to AMQP protocol or connection errors. 9.1. Handling connection and protocol errors You can handle protocol-level errors by intercepting the following events: connection_error session_error sender_error receiver_error protocol_error error These events are fired whenever there is an error condition with the specific object that is in the event. After calling the error handler, the corresponding <object> _close handler is also called. The event argument has an error attribute for accessing the error object. Example: Handling errors container.on("error", function (event) { console.log("An error!", event.error); }); Note Because the close handlers are called in the event of any error, only the error itself needs to be handled within the error handler. Resource cleanup can be managed by close handlers. If there is no error handling that is specific to a particular object, it is typical to handle the general error event and not have a more specific handler. Note When reconnect is enabled and the remote server closes a connection with the amqp:connection:forced condition, the client does not treat it as an error and thus does not fire the connection_error event. The client instead begins the reconnection process. | [
"container.on(\"error\", function (event) { console.log(\"An error!\", event.error); });"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_javascript_client/error_handling |
7.4. Modifying Cache Entries | 7.4. Modifying Cache Entries After the cache entry has been created, the cache entry can be modified programmatically. 7.4.1. Cache Entry Modified Listener Configuration In a cache entry modified listener event, the getValue() method's behavior depends on whether the callback is triggered before or after the actual operation has been performed. For example, if event.isPre() is true, then event.getValue() returns the old value, prior to modification. If event.isPre() is false, then event.getValue() returns the new value. If the event is creating and inserting a new entry, the old value is null. For more information about isPre() , see the Red Hat JBoss Data Grid API Documentation 's listing for the org.infinispan.notifications.cachelistener.event package. Listeners can only be configured programmatically by using the methods exposed by the Listenable and FilteringListenable interfaces (which the Cache object implements). 7.4.2. Cache Entry Modified Listener Example The following example defines a listener in Red Hat JBoss Data Grid that prints some information each time a cache entry is modified: Example 7.2. Modified Listener | [
"@Listener public class PrintWhenModified { @CacheEntryModified public void print(CacheEntryModifiedEvent event) { System.out.println(\"Cache entry modified. Details = \" + event\"); } }"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-modifying_cache_entries |
10.2.4.5. The mod_auth_dbm and mod_auth_db Modules | 10.2.4.5. The mod_auth_dbm and mod_auth_db Modules Apache HTTP Server 1.3 supported two authentication modules, mod_auth_db and mod_auth_dbm , which used Berkeley Databases and DBM databases respectively. These modules have been combined into a single module named mod_auth_dbm in Apache HTTP Server 2.0, which can access several different database formats. To migrate from mod_auth_db , configuration files should be adjusted by replacing AuthDBUserFile and AuthDBGroupFile with the mod_auth_dbm equivalents, AuthDBMUserFile and AuthDBMGroupFile . Also, the directive AuthDBMType DB must be added to indicate the type of database file in use. The following example shows a sample mod_auth_db configuration for Apache HTTP Server 1.3: To migrate this setting to version 2.0 of Apache HTTP Server, use the following structure: Note that the AuthDBMUserFile directive can also be used in .htaccess files. The dbmmanage Perl script, used to manipulate username and password databases, has been replaced by htdbm in Apache HTTP Server 2.0. The htdbm program offers equivalent functionality and, like mod_auth_dbm , can operate a variety of database formats; the -T option can be used on the command line to specify the format to use. Table 10.1, "Migrating from dbmmanage to htdbm " shows how to migrate from a DBM-format database to htdbm format using dbmmanage . Table 10.1. Migrating from dbmmanage to htdbm Action dbmmanage command (1.3) Equivalent htdbm command (2.0) Add user to database (using given password) dbmmanage authdb add username password htdbm -b -TDB authdb username password Add user to database (prompts for password) dbmmanage authdb adduser username htdbm -TDB authdb username Remove user from database dbmmanage authdb delete username htdbm -x -TDB authdb username List users in database dbmmanage authdb view htdbm -l -TDB authdb Verify a password dbmmanage authdb check username htdbm -v -TDB authdb username The -m and -s options work with both dbmmanage and htdbm , enabling the use of the MD5 or SHA1 algorithms for hashing passwords, respectively. When creating a new database with htdbm , the -c option must be used. For more on this topic, refer to the following documentation on the Apache Software Foundation's website: http://httpd.apache.org/docs-2.0/mod/mod_auth_dbm.html | [
"<Location /private/> AuthType Basic AuthName \"My Private Files\" AuthDBUserFile /var/www/authdb require valid-user </Location>",
"<Location /private/> AuthType Basic AuthName \"My Private Files\" AuthDBMUserFile /var/www/authdb AuthDBMType DB require valid-user </Location>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-httpd-v2-mig-mod-dbm |
4.22. IF-MIB | 4.22. IF-MIB Table 4.23, "IF MIB" lists the fence device parameters used by fence_ifmib , the fence agent for IF-MIB devices. Table 4.23. IF MIB luci Field cluster.conf Attribute Description Name name A name for the IF MIB device connected to the cluster. IP Address or Hostname ipaddr The IP address or host name assigned to the device. UDP/TCP Port (optional) udpport The UDP/TCP port to use for connection with the device; the default value is 161. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. SNMP Version snmp_version The SNMP version to use (1, 2c, 3); the default value is 1. SNMP Community community The SNMP community string. SNMP Security Level snmp_sec_level The SNMP security level (noAuthNoPriv, authNoPriv, authPriv). SNMP Authentication Protocol snmp_auth_prot The SNMP authentication protocol (MD5, SHA). SNMP Privacy Protocol snmp_priv_prot The SNMP privacy protocol (DES, AES). SNMP Privacy Protocol Password snmp_priv_passwd The SNMP privacy protocol password. SNMP Privacy Protocol Script snmp_priv_passwd_script The script that supplies a password for SNMP privacy protocol. Using this supersedes the SNMP privacy protocol password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port (Outlet) Number port Physical plug number or name of virtual machine. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Figure 4.17, "IF-MIB" shows the configuration screen for adding an IF-MIB fence device. Figure 4.17. IF-MIB The following command creates a fence device instance for an IF-MIB device: The following is the cluster.conf entry for the fence_ifmib device: | [
"ccs -f cluster.conf --addfencedev ifmib1 agent=fence_ifmib community=private ipaddr=192.168.0.1 login=root passwd=password123 snmp_priv_passwd=snmpasswd123 power_wait=60 udpport=161",
"<fencedevices> <fencedevice agent=\"fence_ifmib\" community=\"private\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"ifmib1\" passwd=\"password123\" power_wait=\"60\" snmp_priv_passwd=\"snmpasswd123\" udpport=\"161\"/> </fencedevices>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-ifmib-ca |
Chapter 7. Red Hat Quay Application Programming Interface (API) | Chapter 7. Red Hat Quay Application Programming Interface (API) This API allows you to perform many of the operations required to work with Red Hat Quay repositories, users, and organizations. 7.1. Authorization oauth2_implicit Scopes The following scopes are used to control access to the API endpoints: Scope Description repo:read This application will be able to view and pull all repositories visible to the granting user or robot account repo:write This application will be able to view, push and pull to all repositories to which the granting user or robot account has write access repo:admin This application will have administrator access to all repositories to which the granting user or robot account has access repo:create This application will be able to create repositories in to any namespaces that the granting user or robot account is allowed to create repositories user:read This application will be able to read user information such as username and email address. org:admin This application will be able to administer your organizations including creating robots, creating teams, adjusting team membership, and changing billing settings. You should have absolute trust in the requesting application before granting this permission. super:user This application will be able to administer your installation including managing users, managing organizations and other features found in the superuser panel. You should have absolute trust in the requesting application before granting this permission. user:admin This application will be able to administer your account including creating robots and granting them permissions to your repositories. You should have absolute trust in the requesting application before granting this permission. 7.2. appspecifictokens Manages app specific tokens for the current user. 7.2.1. createAppToken Create a new app specific token for user. POST /api/v1/user/apptoken Authorizations: oauth2_implicit ( user:admin ) Request body schema (application/json) Description of a new token. Name Description Schema title required Friendly name to help identify the token string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "title": "MyAppToken" }' \ "http://quay-server.example.com/api/v1/user/apptoken" 7.2.2. listAppTokens Lists the app specific tokens for the user. GET /api/v1/user/apptoken Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query expiring optional If true, only returns those tokens expiring soon boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "http://quay-server.example.com/api/v1/user/apptoken" 7.2.3. getAppToken Returns a specific app token for the user. 
GET /api/v1/user/apptoken/{token_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path token_uuid required The uuid of the app specific token string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>" 7.2.4. revokeAppToken Revokes a specific app token for the user. DELETE /api/v1/user/apptoken/{token_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path token_uuid required The uuid of the app specific token string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <access_token>" \ "http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>" 7.3. build Create, list, cancel and get status/logs of repository builds. 7.3.1. getRepoBuildStatus Return the status for the builds specified by the build uuids. GET /api/v1/repository/{repository}/build/{build_uuid}/status Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.2. getRepoBuildLogs Return the build logs for the build specified by the build uuid. GET /api/v1/repository/{repository}/build/{build_uuid}/logs Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.3. getRepoBuild Returns information about a build. GET /api/v1/repository/{repository}/build/{build_uuid} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.4. cancelRepoBuild Cancels a repository build. DELETE /api/v1/repository/{repository}/build/{build_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.5. requestRepoBuild Request that a repository be built and pushed from the specified input. 
POST /api/v1/repository/{repository}/build/ Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Description of a new repository build. Name Description Schema file_id optional The file id that was generated when the build spec was uploaded string archive_url optional The URL of the .tar.gz to build. Must start with "http" or "https". string subdirectory optional Subdirectory in which the Dockerfile can be found. You can only specify this or dockerfile_path string dockerfile_path optional Path to a dockerfile. You can only specify this or subdirectory. string context optional Pass in the context for the dockerfile. This is optional. string pull_robot optional Username of a Quay robot account to use as pull credentials string tags optional The tags to which the built images will be pushed. If none specified, "latest" is used. array of string non-empty unique Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.3.6. getRepoBuilds Get the list of repository builds. GET /api/v1/repository/{repository}/build/ Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query since optional Returns all builds since the given unix timecode integer query limit optional The maximum number of builds to return integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError 7.4. discovery API discovery information. 7.4.1. discovery List all of the API endpoints available in the swagger API format. GET /api/v1/discovery Authorizations: Query parameters Type Name Description Schema query internal optional Whether to include internal APIs. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/discovery?query=true" \ -H "Authorization: Bearer <access_token>" 7.5. error Error details API. 7.5.1. getErrorDescription Get a detailed description of the error. GET /api/v1/error/{error_type} Authorizations: Path parameters Type Name Description Schema path error_type required The error code identifying the type of error. string Responses HTTP Code Description Schema 200 Successful invocation ApiErrorDescription 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/error/<error_type>" \ -H "Authorization: Bearer <access_token>" 7.6. globalmessages Messages API. 7.6.1. createGlobalMessage Create a message. 
POST /api/v1/messages Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Create a new message Name Description Schema message required A single message object Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/messages" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "message": { "content": "Hi", "media_type": "text/plain", "severity": "info" } }' 7.6.2. getGlobalMessages Return a super user's messages. GET /api/v1/messages Authorizations: Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/messages" \ -H "Authorization: Bearer <access_token>" 7.6.3. deleteGlobalMessage Delete a message. DELETE /api/v1/message/{uuid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path uuid required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/message/<uuid>" \ -H "Authorization: Bearer <access_token>" 7.7. logs Access usage logs for organizations or repositories. 7.7.1. getAggregateUserLogs Returns the aggregated logs for the current user. GET /api/v1/user/aggregatelogs Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>" 7.7.2. exportUserLogs Queues an export of the logs for the current user. POST /api/v1/user/exportlogs Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Request body schema (application/json) Configuration for an export logs operation Name Description Schema callback_url optional The callback URL to invoke with a link to the exported logs string callback_email optional The e-mail address at which to e-mail a link to the exported logs string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d '{ "starttime": "<MM/DD/YYYY>", "endtime": "<MM/DD/YYYY>", "callback_email": "<callback_email>" }' \ "http://<quay-server.example.com>/api/v1/user/exportlogs" 7.7.3.
listUserLogs List the logs for the current user. GET /api/v1/user/logs Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" "<quay-server.example.com>/api/v1/user/logs" 7.7.4. getAggregateOrgLogs Gets the aggregated logs for the specified organization. GET /api/v1/organization/{orgname}/aggregatelogs Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs" 7.7.5. exportOrgLogs Exports the logs for the specified organization. POST /api/v1/organization/{orgname}/exportlogs Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Request body schema (application/json) Configuration for an export logs operation Name Description Schema callback_url optional The callback URL to invoke with a link to the exported logs string callback_email optional The e-mail address at which to e-mail a link to the exported logs string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d '{ "starttime": "<MM/DD/YYYY>", "endtime": "<MM/DD/YYYY>", "callback_email": "<callback_email>" }' \ "http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs" 7.7.6. listOrgLogs List the logs for the specified organization. GET /api/v1/organization/{orgname}/logs Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query next_page optional The page token for the page string query performer optional Username for which to filter logs. string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC.
string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "http://<quay-server.example.com>/api/v1/organization/{orgname}/logs" 7.7.7. getAggregateRepoLogs Returns the aggregated logs for the specified repository. GET /api/v1/repository/{repository}/aggregatelogs Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18" 7.7.8. exportRepoLogs Queues an export of the logs for the specified repository. POST /api/v1/repository/{repository}/exportlogs Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Request body schema (application/json) Configuration for an export logs operation Name Description Schema callback_url optional The callback URL to invoke with a link to the exported logs string callback_email optional The e-mail address at which to e-mail a link to the exported logs string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -H "Accept: application/json" \ -d '{ "starttime": "2024-01-01", "endtime": "2024-06-18", "callback_url": "http://your-callback-url.example.com" }' \ "http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs" 7.7.9. listRepoLogs List the logs for the specified repository. GET /api/v1/repository/{repository}/logs Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query next_page optional The page token for the page string query endtime optional Latest time for logs. Format: "%m/%d/%Y" in UTC. string query starttime optional Earliest time for logs. Format: "%m/%d/%Y" in UTC. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "http://<quay-server.example.com>/api/v1/repository/{repository}/logs" 7.8.
manifest Manage the manifests of a repository. 7.8.1. getManifestLabel Retrieves the label with the specific ID under the manifest. GET /api/v1/repository/{repository}/manifest/{manifestref}/labels/{labelid} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string path labelid required The ID of the label string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id> 7.8.2. deleteManifestLabel Deletes an existing label from a manifest. DELETE /api/v1/repository/{repository}/manifest/{manifestref}/labels/{labelid} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string path labelid required The ID of the label string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid> 7.8.3. addManifestLabel Adds a new label into the tag manifest. POST /api/v1/repository/{repository}/manifest/{manifestref}/labels Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path manifestref required The digest of the manifest string Request body schema (application/json) Adds a label to a manifest Name Description Schema key required The key for the label string value required The value for the label string media_type required The media type for this label Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "key": "<key>", "value": "<value>", "media_type": "<media_type>" }' \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels 7.8.4. listManifestLabels GET /api/v1/repository/{repository}/manifest/{manifestref}/labels Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. 
namespace/name string path manifestref required The digest of the manifest string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref> 7.9. mirror 7.9.1. syncCancel Update the sync_status for a given Repository's mirroring configuration. POST /api/v1/repository/{repository}/mirror/sync-cancel Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-cancel" \ -H "Authorization: Bearer <access_token>" 7.9.2. syncNow Update the sync_status for a given Repository's mirroring configuration. POST /api/v1/repository/{repository}/mirror/sync-now Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-now" \ -H "Authorization: Bearer <access_token>" 7.9.3. getRepoMirrorConfig Return the Mirror configuration for a given Repository. GET /api/v1/repository/{repository}/mirror Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation ViewMirrorConfig 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \ -H "Authorization: Bearer <access_token>" 7.9.4. changeRepoMirrorConfig Allows users to modify the repository's mirroring configuration. PUT /api/v1/repository/{repository}/mirror Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g.
namespace/name string Request body schema (application/json) Update the repository mirroring configuration. Name Description Schema is_enabled optional Used to enable or disable synchronizations. boolean external_reference optional Location of the external repository. string external_registry_username optional Username used to authenticate with external registry. external_registry_password optional Password used to authenticate with external registry. sync_start_date optional Determines the time this repository is ready for synchronization. string sync_interval optional Number of seconds after next_start_date to begin synchronizing. integer robot_username optional Username of robot which will be used for image pushes. string root_rule optional A list of glob-patterns used to determine which tags should be synchronized. object external_registry_config optional object Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "is_enabled": <false>, 1 "external_reference": "<external_reference>", "external_registry_username": "<external_registry_username>", "external_registry_password": "<external_registry_password>", "sync_start_date": "<sync_start_date>", "sync_interval": <sync_interval>, "robot_username": "<robot_username>", "root_rule": { "rule": "<rule>", "rule_type": "<rule_type>" } }' 1 Disables automatic synchronization. 7.9.5. createRepoMirrorConfig Create a RepoMirrorConfig for a given Repository. POST /api/v1/repository/{repository}/mirror Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Create the repository mirroring configuration. Name Description Schema is_enabled optional Used to enable or disable synchronizations. boolean external_reference required Location of the external repository. string external_registry_username optional Username used to authenticate with external registry. external_registry_password optional Password used to authenticate with external registry. sync_start_date required Determines the time this repository is ready for synchronization. string sync_interval required Number of seconds after next_start_date to begin synchronizing. integer robot_username required Username of robot which will be used for image pushes. string root_rule required A list of glob-patterns used to determine which tags should be synchronized. 
object external_registry_config optional object Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "is_enabled": <is_enabled>, "external_reference": "<external_reference>", "external_registry_username": "<external_registry_username>", "external_registry_password": "<external_registry_password>", "sync_start_date": "<sync_start_date>", "sync_interval": <sync_interval>, "robot_username": "<robot_username>", "root_rule": { "rule": "<rule>", "rule_type": "<rule_type>" } }' 7.10. namespacequota 7.10.1. listUserQuota GET /api/v1/user/quota Authorizations: oauth2_implicit ( user:admin ) Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/user/quota" \ -H "Authorization: Bearer <access_token>" 7.10.2. getOrganizationQuotaLimit GET /api/v1/organization/{orgname}/quota/{quota_id}/limit/{limit_id} Authorizations: Path parameters Type Name Description Schema path quota_id required string path limit_id required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>" \ -H "Authorization: Bearer <access_token>" 7.10.3. changeOrganizationQuotaLimit PUT /api/v1/organization/{orgname}/quota/{quota_id}/limit/{limit_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path limit_id required string path orgname required string Request body schema (application/json) Description of changing organization quota limit Name Description Schema type optional Type of quota limit: "Warning" or "Reject" string threshold_percent optional Quota threshold, in percent of quota integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "type": "<type>", "threshold_percent": <threshold_percent> }' 7.10.4. deleteOrganizationQuotaLimit DELETE /api/v1/organization/{orgname}/quota/{quota_id}/limit/{limit_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path limit_id required string path orgname required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>" \ -H "Authorization: Bearer <access_token>" 7.10.5. 
createOrganizationQuotaLimit POST /api/v1/organization/{orgname}/quota/{quota_id}/limit Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path orgname required string Request body schema (application/json) Description of a new organization quota limit Name Description Schema type required Type of quota limit: "Warning" or "Reject" string threshold_percent required Quota threshold, in percent of quota integer Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": 21474836480, "type": "Reject", "threshold_percent": 90 }' 7.10.6. listOrganizationQuotaLimit GET /api/v1/organization/{orgname}/quota/{quota_id}/limit Authorizations: Path parameters Type Name Description Schema path quota_id required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit" \ -H "Authorization: Bearer <access_token>" 7.10.7. getUserQuotaLimit GET /api/v1/user/quota/{quota_id}/limit/{limit_id} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path quota_id required string path limit_id required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/user/quota/{quota_id}/limit/{limit_id}" \ -H "Authorization: Bearer <access_token>" 7.10.8. listUserQuotaLimit GET /api/v1/user/quota/{quota_id}/limit Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path quota_id required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/user/quota/{quota_id}/limit" \ -H "Authorization: Bearer <access_token>" 7.10.9. getOrganizationQuota GET /api/v1/organization/{orgname}/quota/{quota_id} Authorizations: Path parameters Type Name Description Schema path quota_id required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>" \ -H "Authorization: Bearer <access_token>" 7.10.10.
changeOrganizationQuota PUT /api/v1/organization/{orgname}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path orgname required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer limits optional Human readable storage capacity of the organization. Accepts SI units like Mi, Gi, or Ti, as well as non-standard units like GB or MB. Must be mutually exclusive with limit_bytes . string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": <limit_in_bytes> }' 7.10.11. deleteOrganizationQuota DELETE /api/v1/organization/{orgname}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path quota_id required string path orgname required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>" \ -H "Authorization: Bearer <access_token>" 7.10.12. createOrganizationQuota Create a new organization quota. POST /api/v1/organization/{orgname}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path orgname required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes required Number of bytes the organization is allowed integer limits optional Human readable storage capacity of the organization. Accepts SI units like Mi, Gi, or Ti, as well as non-standard units like GB or MB. Must be mutually exclusive with limit_bytes . string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/quota" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": 10737418240, "limits": "10 Gi" }' 7.10.13. listOrganizationQuota GET /api/v1/organization/{orgname}/quota Authorizations: Path parameters Type Name Description Schema path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -k -X GET -H "Authorization: Bearer <token>" -H 'Content-Type: application/json' https://<quay-server.example.com>/api/v1/organization/<organization_name>/quota 7.10.14. 
getUserQuota GET /api/v1/user/quota/{quota_id} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path quota_id required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/user/quota/{quota_id}" \ -H "Authorization: Bearer <access_token>" 7.11. organization Manage organizations, members and OAuth applications. 7.11.1. createOrganization Create a new organization. POST /api/v1/organization/ Authorizations: oauth2_implicit ( user:admin ) Request body schema (application/json) Description of a new organization. Name Description Schema name required Organization username string email optional Organization contact email string recaptcha_response optional The (may be disabled) recaptcha response code for verification string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "name": "<new_organization_name>" }' "https://<quay-server.example.com>/api/v1/organization/" 7.11.2. validateProxyCacheConfig POST /api/v1/organization/{orgname}/validateproxycache Authorizations: Path parameters Type Name Description Schema path orgname required string Request body schema (application/json) Proxy cache configuration for an organization Name Description Schema upstream_registry required Name of the upstream registry that is to be cached string Responses HTTP Code Description Schema 202 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/{orgname}/validateproxycache" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "upstream_registry": "<upstream_registry>" }' 7.11.3. getOrganizationCollaborators List outside collaborators of the specified organization. GET /api/v1/organization/{orgname}/collaborators Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/{orgname}/collaborators" \ -H "Authorization: Bearer <access_token>" 7.11.4. getOrganizationApplication Retrieves the application with the specified client_id under the specified organization. GET /api/v1/organization/{orgname}/applications/{client_id} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path client_id required The OAuth client ID string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/applications/<client_id>" \ -H "Authorization: Bearer <access_token>" 7.11.5. 
updateOrganizationApplication Updates an application under this organization. PUT /api/v1/organization/{orgname}/applications/{client_id} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path client_id required The OAuth client ID string path orgname required The name of the organization string Request body schema (application/json) Description of an updated application. Name Description Schema name required The name of the application string redirect_uri required The URI for the application's OAuth redirect string application_uri required The URI for the application's homepage string description optional The human-readable description for the application string avatar_email optional The e-mail address of the avatar to use for the application string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://quay-server.example.com/api/v1/organization/test/applications/12345" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "name": "Updated Application Name", "redirect_uri": "https://example.com/oauth/callback", "application_uri": "https://example.com", "description": "Updated description for the application", "avatar_email": "<avatar_email>" }' 7.11.6. deleteOrganizationApplication Deletes the application under this organization. DELETE /api/v1/organization/{orgname}/applications/{client_id} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path client_id required The OAuth client ID string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/organization/{orgname}/applications/{client_id}" \ -H "Authorization: Bearer <access_token>" 7.11.7. createOrganizationApplication Creates a new application under this organization. POST /api/v1/organization/{orgname}/applications Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Description of a new organization application. Name Description Schema name required The name of the application string redirect_uri optional The URI for the application's OAuth redirect string application_uri optional The URI for the application's homepage string description optional The human-readable description for the application string avatar_email optional The e-mail address of the avatar to use for the application string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/applications" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "name": "<app_name>", "redirect_uri": "<redirect_uri>", "application_uri": "<application_uri>", "description": "<app_description>", "avatar_email": "<avatar_email>" }' 7.11.8. getOrganizationApplications List the applications for the specified organization.
GET /api/v1/organization/{orgname}/applications Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/applications" \ -H "Authorization: Bearer <access_token>" 7.11.9. getProxyCacheConfig Retrieves the proxy cache configuration of the organization. GET /api/v1/organization/{orgname}/proxycache Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/{orgname}/proxycache" \ -H "Authorization: Bearer <access_token>" 7.11.10. deleteProxyCacheConfig Delete proxy cache configuration for the organization. DELETE /api/v1/organization/{orgname}/proxycache Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/organization/{orgname}/proxycache" \ -H "Authorization: Bearer <access_token>" 7.11.11. createProxyCacheConfig Creates proxy cache configuration for the organization. POST /api/v1/organization/{orgname}/proxycache Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Proxy cache configuration for an organization Name Description Schema upstream_registry required Name of the upstream registry that is to be cached string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://<quay-server.example.com>/api/v1/organization/<orgname>/proxycache" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "upstream_registry": "<upstream_registry>" }' 7.11.12. getOrganizationMember Retrieves the details of a member of the organization. GET /api/v1/organization/{orgname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path membername required The username of the organization member string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/members/<membername>" \ -H "Authorization: Bearer <access_token>" 7.11.13. removeOrganizationMember Removes a member from an organization, revoking all its repository privileges and removing it from all teams in the organization.
DELETE /api/v1/organization/{orgname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path membername required The username of the organization member string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://<quay-server.example.com>/api/v1/organization/<orgname>/members/<membername>" \ -H "Authorization: Bearer <access_token>" 7.11.14. getOrganizationMembers List the human members of the specified organization. GET /api/v1/organization/{orgname}/members Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/organization/<orgname>/members" \ -H "Authorization: Bearer <access_token>" 7.11.15. getOrganization Get the details for the specified organization. GET /api/v1/organization/{orgname} Authorizations: Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>" 7.11.16. changeOrganizationDetails Change the details for the specified organization. PUT /api/v1/organization/{orgname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Description of updates for an existing organization Name Description Schema email optional Organization contact email string invoice_email optional Whether the organization desires to receive emails for invoices boolean invoice_email_address optional The email address at which to receive invoices tag_expiration_s optional The number of seconds for tag expiration integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "email": "<organization_email>", "invoice_email": <true_or_false>, "tag_expiration_s": <seconds> }' 7.11.17. deleteAdminedOrganization Deletes the specified organization. DELETE /api/v1/organization/{orgname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>" 7.11.18. getApplicationInformation Get information on the specified application.
GET /api/v1/app/{client_id} Authorizations: Path parameters Type Name Description Schema path client_id required The OAuth client ID string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://<quay-server.example.com>/api/v1/app/<client_id>" \ -H "Authorization: Bearer <access_token>" 7.12. permission Manage repository permissions. 7.12.1. getUserTransitivePermission Fetch the permission for the specified user. GET /api/v1/repository/{repository}/permissions/user/{username}/transitive Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permissions apply string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>/transitive" 7.12.2. getUserPermissions Get the permission for the specified user. GET /api/v1/repository/{repository}/permissions/user/{username} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permission applies string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>" 7.12.3. changeUserPermissions Update the permissions for an existing repository. PUT /api/v1/repository/{repository}/permissions/user/{username} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permission applies string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Description of a user permission. Name Description Schema role required Role to use for the user string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{"role": "admin"}' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> 7.12.4. deleteUserPermissions Delete the permission for the user. DELETE /api/v1/repository/{repository}/permissions/user/{username} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path username required The username of the user to which the permission applies string path repository required The full path of the repository. e.g.
namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username> 7.12.5. getTeamPermissions Fetch the permission for the specified team. GET /api/v1/repository/{repository}/permissions/team/{teamname} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path teamname required The name of the team to which the permission applies string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>" 7.12.6. changeTeamPermissions Update the existing team permission. PUT /api/v1/repository/{repository}/permissions/team/{teamname} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path teamname required The name of the team to which the permission applies string Request body schema (application/json) Description of a team permission. Name Description Schema role required Role to use for the team string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{"role": "<role>"}' \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>" 7.12.7. deleteTeamPermissions Delete the permission for the specified team. DELETE /api/v1/repository/{repository}/permissions/team/{teamname} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path teamname required The name of the team to which the permission applies string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <access_token>" \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>" 7.12.8. listRepoTeamPermissions List all team permissions. GET /api/v1/repository/{repository}/permissions/team/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <access_token>" \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/" 7.12.9. listRepoUserPermissions List all user permissions.
GET /api/v1/repository/{repository}/permissions/user/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/ 7.13. policy 7.13.1. createOrganizationAutoPrunePolicy Creates an auto-prune policy for the organization POST /api/v1/organization/{orgname}/autoprunepolicy/ Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "number_of_tags", "value": 10}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/ 7.13.2. listOrganizationAutoPrunePolicies Lists the auto-prune policies for the organization GET /api/v1/organization/{orgname}/autoprunepolicy/ Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/organization/example_org/autoprunepolicy/" \ -H "Authorization: Bearer <your_access_token>" 7.13.3. getOrganizationAutoPrunePolicy Fetches the auto-prune policy for the organization GET /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <access_token>" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<policy_uuid> 7.13.4. 
deleteOrganizationAutoPrunePolicy Deletes the auto-prune policy for the organization DELETE /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/organization/example_org/autoprunepolicy/example_policy_uuid" \ -H "Authorization: Bearer <your_access_token>" 7.13.5. updateOrganizationAutoPrunePolicy Updates the auto-prune policy for the organization PUT /api/v1/organization/{orgname}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path orgname required The name of the organization string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 204 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "method": "creation_date", "value": "4d", "tagPattern": "^v*", "tagPatternMatches": true }' "<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<uuid>" 7.13.6. createRepositoryAutoPrunePolicy Creates an auto-prune policy for the repository POST /api/v1/repository/{repository}/autoprunepolicy/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"method": "number_of_tags","value": 2}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/ 7.13.7. 
listRepositoryAutoPrunePolicies Lists the auto-prune policies for the repository GET /api/v1/repository/{repository}/autoprunepolicy/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/autoprunepolicy/" \ -H "Authorization: Bearer <your_access_token>" 7.13.8. getRepositoryAutoPrunePolicy Fetches the auto-prune policy for the repository GET /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/autoprunepolicy/123e4567-e89b-12d3-a456-426614174000" \ -H "Authorization: Bearer <your_access_token>" 7.13.9. deleteRepositoryAutoPrunePolicy Deletes the auto-prune policy for the repository DELETE /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/autoprunepolicy/123e4567-e89b-12d3-a456-426614174000" \ -H "Authorization: Bearer <your_access_token>" 7.13.10. updateRepositoryAutoPrunePolicy Updates the auto-prune policy for the repository PUT /api/v1/repository/{repository}/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 
7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "number_of_tags", "value": "5", "tagPattern": "^test.*", "tagPatternMatches": true }' \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/autoprunepolicy/<uuid>" 7.13.11. createUserAutoPrunePolicy Creates the auto-prune policy for the currently logged in user POST /api/v1/user/autoprunepolicy/ Authorizations: oauth2_implicit ( user:admin ) Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/user/autoprunepolicy/" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "number_of_tags", "value": 10, "tagPattern": "v*", "tagPatternMatches": true }' 7.13.12. listUserAutoPrunePolicies Lists the auto-prune policies for the currently logged in user GET /api/v1/user/autoprunepolicy/ Authorizations: oauth2_implicit ( user:admin ) Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/user/autoprunepolicy/" \ -H "Authorization: Bearer <your_access_token>" 7.13.13. getUserAutoPrunePolicy Fetches the auto-prune policy for the currently logged in user GET /api/v1/user/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/user/autoprunepolicy/{policy_uuid}" \ -H "Authorization: Bearer <your_access_token>" 7.13.14. 
deleteUserAutoPrunePolicy Deletes the auto-prune policy for the currently logged in user DELETE /api/v1/user/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/user/autoprunepolicy/<policy_uuid>" \ -H "Authorization: Bearer <your_access_token>" 7.13.15. updateUserAutoPrunePolicy Updates the auto-prune policy for the currently logged in user PUT /api/v1/user/autoprunepolicy/{policy_uuid} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path policy_uuid required The unique ID of the policy string Request body schema (application/json) The policy configuration that is to be applied to the user namespace Name Description Schema method required The method to use for pruning tags (number_of_tags, creation_date) string value required The value to use for the pruning method (number of tags e.g. 10, time delta e.g. 7d (7 days)) tagPattern optional Tags only matching this pattern will be pruned string tagPatternMatches optional Determine whether pruned tags should or should not match the tagPattern boolean Responses HTTP Code Description Schema 204 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://quay-server.example.com/api/v1/user/autoprunepolicy/<policy_uuid>" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "method": "number_of_tags", "value": "10", "tagPattern": ".*-old", "tagPatternMatches": true }' 7.14. prototype Manage default permissions added to repositories. 7.14.1. updateOrganizationPrototypePermission Update the role of an existing permission prototype. PUT /api/v1/organization/{orgname}/prototypes/{prototypeid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path prototypeid required The ID of the prototype string path orgname required The name of the organization string Request body schema (application/json) Description of a the new prototype role Name Description Schema role optional Role that should be applied to the permission string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "role": "write" }' \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid> 7.14.2. deleteOrganizationPrototypePermission Delete an existing permission prototype. 
DELETE /api/v1/organization/{orgname}/prototypes/{prototypeid} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path prototypeid required The ID of the prototype string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id> 7.14.3. createOrganizationPrototypePermission Create a new permission prototype. POST /api/v1/organization/{orgname}/prototypes Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Request body schema (application/json) Description of a new prototype Name Description Schema role required Role that should be applied to the delegate string activating_user optional Repository creating user to whom the rule should apply object delegate required Information about the user or team to which the rule grants access object Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" --data '{ "role": "<admin_read_or_write>", "delegate": { "name": "<username>", "kind": "user" }, "activating_user": { "name": "<robot_name>" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes 7.14.4. getOrganizationPrototypePermissions List the existing prototypes for this organization. GET /api/v1/organization/{orgname}/prototypes Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes 7.15. referrers List v2 API referrers 7.15.1. getReferrers List v2 API referrers of an image digest. GET /v2/{organization_name}/{repository_name}/referrers/{digest} Request body schema (application/json) Referrers of an image digest. Type Name Description Schema path orgname required The name of the organization string path repository required The full path of the repository. e.g. namespace/name string path referrers required Looks up the OCI referrers of a manifest under a repository. string 7.16. repository List, create and manage repositories. 7.16.1. createRepo Create a new repository. POST /api/v1/repository Authorizations: oauth2_implicit ( repo:create ) Request body schema (application/json) Description of a new repository Name Description Schema repository required Repository name string visibility required Visibility which the repository will start with string namespace optional Namespace in which the repository should be created. 
If omitted, the username of the caller is used string description required Markdown encoded description for the repository string repo_kind optional The kind of repository Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "repository": "<new_repository_name>", "visibility": "<public>", "description": "<This is a description of the new repository>." }' \ "https://quay-server.example.com/api/v1/repository" 7.16.2. listRepos Fetch the list of repositories visible to the current user under a variety of situations. GET /api/v1/repository Authorizations: oauth2_implicit ( repo:read ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query repo_kind optional The kind of repositories to return string query popularity optional Whether to include the repository's popularity metric. boolean query last_modified optional Whether to include when the repository was last modified. boolean query public required Adds any repositories visible to the user by virtue of being public boolean query starred required Filters the repositories returned to those starred by the user boolean query namespace required Filters the repositories returned to this namespace string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ "https://quay-server.example.com/api/v1/repository?public=true&starred=false&namespace=<NAMESPACE>" 7.16.3. changeRepoVisibility Change the visibility of a repository. POST /api/v1/repository/{repository}/changevisibility Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Change the visibility for the repository. Name Description Schema visibility required Visibility which the repository will start with string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "visibility": "private" }' \ "https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPO_NAME>/changevisibility" 7.16.4. changeRepoState Change the state of a repository. PUT /api/v1/repository/{repository}/changestate Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Change the state of the repository. Name Description Schema state required Determines whether pushes are allowed. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "state": "<state>" }' \ "https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPO_NAME>/changestate" 7.16.5. getRepo Fetch the specified repository.
GET /api/v1/repository/{repository} Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query includeTags optional Whether to include repository tags boolean query includeStats optional Whether to include action statistics boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" 7.16.6. updateRepo Update the description in the specified repository. PUT /api/v1/repository/{repository} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Fields which can be updated in a repository. Name Description Schema description required Markdown encoded description for the repository string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "description": "This is an updated description for the repository." }' \ "https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPOSITORY>" 7.16.7. deleteRepository Delete a repository. DELETE /api/v1/repository/{repository} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" 7.17. repositorynotification List, create and manage repository events/notifications. 7.17.1. testRepoNotification Queues a test notification for this repository. POST /api/v1/repository/{repository}/notification/{uuid}/test Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test 7.17.2. getRepoNotification Get information for the specified notification. GET /api/v1/repository/{repository}/notification/{uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. 
namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid> 7.17.3. deleteRepoNotification Deletes the specified notification. DELETE /api/v1/repository/{repository}/notification/{uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid> 7.17.4. resetRepositoryNotificationFailures Resets repository notification to 0 failures. POST /api/v1/repository/{repository}/notification/{uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path uuid required The UUID of the notification string Responses HTTP Code Description Schema 204 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid> 7.17.5. createRepoNotification POST /api/v1/repository/{repository}/notification/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Information for creating a notification on a repository Name Description Schema event required The event on which the notification will respond string method required The method of notification (such as email or web callback) string config required JSON config information for the specific method of notification object eventConfig required JSON config information for the specific event of notification object title optional The human-readable title of the notification string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "event": "<event>", "method": "<method>", "config": { "<config_key>": "<config_value>" }, "eventConfig": { "<eventConfig_key>": "<eventConfig_value>" } }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/ 7.17.6. listRepoNotifications List the notifications for the specified repository. GET /api/v1/repository/{repository}/notification/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g.
namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification 7.18. robot Manage user and organization robot accounts. 7.18.1. getUserRobots List the available robots for the user. GET /api/v1/user/robots Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query limit optional If specified, the number of robots to return. integer query token optional If false, the robot's token is not returned. boolean query permissions optional Whether to include repositories and teams in which the robots have permission. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/user/robots?limit=10&token=false&permissions=true" \ -H "Authorization: Bearer <your_access_token>" 7.18.2. getOrgRobotPermissions Returns the list of repository permissions for the org's robot. GET /api/v1/organization/{orgname}/robots/{robot_shortname}/permissions Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://quay-server.example.com/api/v1/organization/<ORGNAME>/robots/<ROBOT_SHORTNAME>/permissions" 7.18.3. regenerateOrgRobotToken Regenerates the token for an organization robot. POST /api/v1/organization/{orgname}/robots/{robot_shortname}/regenerate Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate" 7.18.4. getUserRobotPermissions Returns the list of repository permissions for the user's robot. GET /api/v1/user/robots/{robot_shortname}/permissions Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://quay-server.example.com/api/v1/user/robots/<ROBOT_SHORTNAME>/permissions" 7.18.5. regenerateUserRobotToken Regenerates the token for a user's robot. 
POST /api/v1/user/robots/{robot_shortname}/regenerate Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate" 7.18.6. getOrgRobot Returns the organization's robot with the specified name. GET /api/v1/organization/{orgname}/robots/{robot_shortname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://quay-server.example.com/api/v1/organization/<ORGNAME>/robots/<ROBOT_SHORTNAME>" 7.18.7. createOrgRobot Create a new robot in the organization. PUT /api/v1/organization/{orgname}/robots/{robot_shortname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Request body schema (application/json) Optional data for creating a robot Name Description Schema description optional Optional text description for the robot string unstructured_metadata optional Optional unstructured metadata for the robot object Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>" 7.18.8. deleteOrgRobot Delete an existing organization robot. DELETE /api/v1/organization/{orgname}/robots/{robot_shortname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>" 7.18.9. getOrgRobots List the organization's robots. GET /api/v1/organization/{orgname}/robots Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path orgname required The name of the organization string Query parameters Type Name Description Schema query limit optional If specified, the number of robots to return. integer query token optional If false, the robot's token is not returned. boolean query permissions optional Whether to include repositories and teams in which the robots have permission. 
boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots" 7.18.10. getUserRobot Returns the user's robot with the specified name. GET /api/v1/user/robots/{robot_shortname} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" 7.18.11. createUserRobot Create a new user robot with the specified name. PUT /api/v1/user/robots/{robot_shortname} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Request body schema (application/json) Optional data for creating a robot Name Description Schema description optional Optional text description for the robot string unstructured_metadata optional Optional unstructured metadata for the robot object Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/user/robots/<robot_name>" 7.18.12. deleteUserRobot Delete an existing robot. DELETE /api/v1/user/robots/{robot_shortname} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path robot_shortname required The short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/user/robots/<robot_shortname>" 7.18.13. Auth Federated Robot Token Return an expiring robot token using the robot identity federation mechanism. GET oauth2/federation/robot/token Authorizations: oauth2_implicit ( robot:auth ) Responses HTTP Code Description Schema 200 Successful authentication and token generation { "token": "string" } 401 Unauthorized: missing or invalid authentication { "error": "string" } Request Body Type Name Description Schema body auth_result required The result of the authentication process, containing information about the robot identity. { "missing": "boolean", "error_message": "string", "context": { "robot": "RobotObject" } } Example command USD curl -X GET "https://quay-server.example.com/oauth2/federation/robot/token" \ -H "Authorization: Bearer <your_access_token>" 7.18.14. createOrgRobotFederation Create a federation configuration for the specified organization robot. POST /api/v1/organization/{orgname}/robots/{robot_shortname}/federation Retrieve the federation configuration for the specified organization robot. 
Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path orgname + robot_shortname required The name of the organization and the short name for the robot, without any user or organization prefix string Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/organization/{orgname}/robots/{robot_shortname}/federation" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" 7.19. search Conduct searches against all registry context. 7.19.1. conductRepoSearch Get a list of apps and repositories that match the specified query. GET /api/v1/find/repositories Authorizations: Query parameters Type Name Description Schema query includeUsage optional Whether to include usage metadata boolean query page optional The page. integer query query optional The search query. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/find/repositories?query=<repo_name>&page=1&includeUsage=true" \ -H "Authorization: Bearer <bearer_token>" 7.19.2. conductSearch Get a list of entities and resources that match the specified query. GET /api/v1/find/all Authorizations: oauth2_implicit ( repo:read ) Query parameters Type Name Description Schema query query optional The search query. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/find/all?query=<mysearchterm>" \ -H "Authorization: Bearer <bearer_token>" 7.19.3. getMatchingEntities Get a list of entities that match the specified prefix. GET /api/v1/entities/{prefix} Authorizations: Path parameters Type Name Description Schema path prefix required string Query parameters Type Name Description Schema query includeOrgs optional Whether to include orgs names. boolean query includeTeams optional Whether to include team names. boolean query namespace optional Namespace to use when querying for org entities. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/entities/<prefix>?includeOrgs=<true_or_false>&includeTeams=<true_or_false>&namespace=<namespace>" \ -H "Authorization: Bearer <bearer_token>" 7.20. secscan List and manage repository vulnerabilities and other security information. 7.20.1. getRepoManifestSecurity GET /api/v1/repository/{repository}/manifest/{manifestref}/security Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. 
namespace/name string path manifestref required The digest of the manifest string Query parameters Type Name Description Schema query vulnerabilities optional Include vulnerabilities information boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ "https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>" 7.21. superuser Superuser API. 7.21.1. createInstallUser Creates a new user. POST /api/v1/superuser/users/ Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Data for creating a user Name Description Schema username required The username of the user being created string email optional The email address of the user being created string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "username": "newuser", "email": "[email protected]" }' "https://<quay-server.example.com>/api/v1/superuser/users/" 7.21.2. deleteInstallUser Deletes a user. DELETE /api/v1/superuser/users/{username} Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Data for deleting a user Name Description Schema username required The username of the user being deleted string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/users/{username}" 7.21.3. listAllUsers Returns a list of all users in the system. GET /api/v1/superuser/users/ Authorizations: oauth2_implicit ( super:user ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query limit optional Limit to the number of results to return per page. Max 100. integer query disabled optional If false, only enabled users will be returned. boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/users/" 7.21.4. listAllLogs List the usage logs for the current system.
GET /api/v1/superuser/logs Authorizations: oauth2_implicit ( super:user ) Query parameters Type Name Description Schema query next_page optional The page token for the page string query page optional The page number for the logs integer query endtime optional Latest time to which to get logs (%m/%d/%Y %Z) string query starttime optional Earliest time from which to get logs (%m/%d/%Y %Z) string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay_server>/api/v1/superuser/logs?starttime=<start_time>&endtime=<end_time>&page=<page_number>&next_page=<next_page_token>" 7.21.5. listAllOrganizations List the organizations for the current system. GET /api/v1/superuser/organizations Authorizations: oauth2_implicit ( super:user ) Query parameters Type Name Description Schema path name required The name of the organization being managed string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET -H "Authorization: Bearer <bearer_token>" "https://<quay-server.example.com>/api/v1/superuser/organizations/" 7.21.6. createServiceKey POST /api/v1/superuser/keys Authorizations: oauth2_implicit ( super:user ) Request body schema (application/json) Description of creation of a service key Name Description Schema service required The service authenticating with this key string name optional The friendly name of a service key string metadata optional The key/value pairs of this key's metadata object notes optional If specified, the extra notes for the key string expiration required The expiration date as a unix timestamp Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "service": "<service_name>", "expiration": <unix_timestamp> }' \ "<quay_server>/api/v1/superuser/keys" 7.21.7. listServiceKeys GET /api/v1/superuser/keys Authorizations: oauth2_implicit ( super:user ) Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay_server>/api/v1/superuser/keys" 7.21.8. changeUserQuotaSuperUser PUT /api/v1/superuser/organization/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota/<quota_id>" \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": <NEW_QUOTA_LIMIT> }' 7.21.9. 
deleteUserQuotaSuperUser DELETE /api/v1/superuser/organization/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota/<quota_id>" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.10. createUserQuotaSuperUser POST /api/v1/superuser/organization/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes required Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota" \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": 10737418240 }' 7.21.11. listUserQuotaSuperUser GET /api/v1/superuser/organization/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.12. changeOrganizationQuotaSuperUser PUT /api/v1/superuser/users/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://quay-server.example.com/api/v1/superuser/users/<username>/quota/<quota_id>" \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": <NEW_QUOTA_LIMIT> }' 7.21.13. deleteOrganizationQuotaSuperUser DELETE /api/v1/superuser/users/{namespace}/quota/{quota_id} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string path quota_id required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/superuser/users/<username>/quota/<quota_id>" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.14. 
createOrganizationQuotaSuperUser POST /api/v1/superuser/users/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Request body schema (application/json) Description of a new organization quota Name Description Schema limit_bytes optional Number of bytes the organization is allowed integer Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/superuser/users/<username>/quota" \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "limit_bytes": <QUOTA_LIMIT> }' 7.21.15. listOrganizationQuotaSuperUser GET /api/v1/superuser/users/{namespace}/quota Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/superuser/users/<username>/quota" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.16. changeOrganization Updates information about the specified organization. PUT /api/v1/superuser/organizations/{name} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path name required The name of the organization being managed string Request body schema (application/json) Description of updates for an existing organization Name Description Schema email optional Organization contact email string invoice_email optional Whether the organization desires to receive emails for invoices boolean invoice_email_address optional The email address at which to receive invoices tag_expiration_s optional The number of seconds for tag expiration integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "email": "<contact_email>", "invoice_email": <boolean_value>, "invoice_email_address": "<invoice_email_address>", "tag_expiration_s": <expiration_seconds> }' \ "https://<quay_server>/api/v1/superuser/organizations/<organization_name>" 7.21.17. deleteOrganization Deletes the specified organization. DELETE /api/v1/superuser/organizations/{name} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path name required The name of the organization being managed string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay_server>/api/v1/superuser/organizations/<organization_name>" 7.21.18.
approveServiceKey POST /api/v1/superuser/approvedkeys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Request body schema (application/json) Information for approving service keys Name Description Schema notes optional Optional approval notes string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "notes": "<approval_notes>" }' \ "https://<quay_server>/api/v1/superuser/approvedkeys/<kid>" 7.21.19. deleteServiceKey DELETE /api/v1/superuser/keys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay_server>/api/v1/superuser/keys/<kid>" 7.21.20. updateServiceKey PUT /api/v1/superuser/keys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Request body schema (application/json) Description of updates for a service key Name Description Schema name optional The friendly name of a service key string metadata optional The key/value pairs of this key's metadata object expiration optional The expiration date as a unix timestamp Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "name": "<service_key_name>", "metadata": {"<key>": "<value>"}, "expiration": <unix_timestamp> }' \ "https://<quay_server>/api/v1/superuser/keys/<kid>" 7.21.21. getServiceKey GET /api/v1/superuser/keys/{kid} Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path kid required The unique identifier for a service key string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay_server>/api/v1/superuser/keys/<kid>" 7.21.22. getRepoBuildStatusSuperUser Return the status for the builds specified by the build uuids. GET /api/v1/superuser/{build_uuid}/status Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/superuser/<build_uuid>/status" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.23. getRepoBuildSuperUser Returns information about a build. 
GET /api/v1/superuser/{build_uuid}/build Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/superuser/<build_uuid>/build" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.24. getRepoBuildLogsSuperUser Return the build logs for the build specified by the build uuid. GET /api/v1/superuser/{build_uuid}/logs Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path build_uuid required The UUID of the build string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/superuser/<build_uuid>/logs" \ -H "Authorization: Bearer <ACCESS_TOKEN>" 7.21.25. getRegistrySize GET /api/v1/superuser/registrysize/ Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Description of an image registry size Name Description Schema size_bytes * optional Number of bytes the organization is allowed integer last_ran integer queued boolean running boolean Responses HTTP Code Description Schema 200 CREATED 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay_server>/api/v1/superuser/registrysize/" 7.21.26. postRegistrySize POST /api/v1/superuser/registrysize/ Authorizations: oauth2_implicit ( super:user ) Path parameters Type Name Description Schema path namespace required string Request body schema (application/json) Description of an image registry size Name Description Schema last_ran integer queued boolean running boolean Responses HTTP Code Description Schema 201 CREATED 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/superuser/registrysize/" \ -H "Authorization: Bearer <ACCESS_TOKEN>" \ -H "Content-Type: application/json" \ -d '{ "namespace": "<namespace>", "last_ran": 1700000000, "queued": true, "running": false }' 7.22. tag Manage the tags of a repository. 7.22.1. restoreTag Restores a repository tag back to an image in the repository. POST /api/v1/repository/{repository}/tag/{tag}/restore Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. 
namespace/name string path tag required The name of the tag string Request body schema (application/json) Restores a tag to a specific image Name Description Schema manifest_digest required If specified, the manifest digest that should be used string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": <manifest_digest> }' \ quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore 7.22.2. changeTag Change which image a tag points to or create a new tag. PUT /api/v1/repository/{repository}/tag/{tag} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path tag required The name of the tag string Request body schema (application/json) Makes changes to a specific tag Name Description Schema manifest_digest optional (If specified) The manifest digest to which the tag should point expiration optional (If specified) The expiration for the image Responses HTTP Code Description Schema 201 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "manifest_digest": "<manifest_digest>" }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag> 7.22.3. deleteFullTag Delete the specified repository tag. DELETE /api/v1/repository/{repository}/tag/{tag} Authorizations: oauth2_implicit ( repo:write ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string path tag required The name of the tag string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/tag/<tag_name>" \ -H "Authorization: Bearer <your_access_token>" 7.22.4. listRepoTags GET /api/v1/repository/{repository}/tag/ Authorizations: oauth2_implicit ( repo:read ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query onlyActiveTags optional Filter to only active tags. boolean query page optional Page index for the results. Default 1. integer query limit optional Limit to the number of results to return per page. Max 100. integer query filter_tag_name optional Syntax: <op>:<name> Filters the tag names based on the operation.<op> can be 'like' or 'eq'. string query specificTag optional Filters the tags to the specific tag. string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ -H "Accept: application/json" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/ 7.23. team Create, list and manage an organization's teams. 7.23.1. 
getOrganizationTeamPermissions Returns the list of repository permissions for the org's team. GET /api/v1/organization/{orgname}/team/{teamname}/permissions Authorizations: Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions" 7.23.2. updateOrganizationTeamMember Adds or invites a member to an existing team. PUT /api/v1/organization/{orgname}/team/{teamname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path membername required The username of the team member string path orgname required The name of the organization string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>" 7.23.3. deleteOrganizationTeamMember Delete a member of a team. DELETE /api/v1/organization/{orgname}/team/{teamname}/members/{membername} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path membername required The username of the team member string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>" 7.23.4. getOrganizationTeamMembers Retrieve the list of members for the specified team. GET /api/v1/organization/{orgname}/team/{teamname}/members Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Query parameters Type Name Description Schema query includePending optional Whether to include pending members boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members" 7.23.5. inviteTeamMemberEmail Invites an email address to an existing team. 
PUT /api/v1/organization/{orgname}/team/{teamname}/invite/{email} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path email required string path teamname required string path orgname required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>" 7.23.6. deleteTeamMemberEmailInvite Delete an invite of an email address to join a team. DELETE /api/v1/organization/{orgname}/team/{teamname}/invite/{email} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path email required string path teamname required string path orgname required string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command + USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>" 7.23.7. updateOrganizationTeam Update the org-wide permission for the specified team. Note This API is also used to create a team. PUT /api/v1/organization/{orgname}/team/{teamname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Request body schema (application/json) Description of a team Name Description Schema role required Org wide permissions that should apply to the team string description optional Markdown description for the team string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H "Authorization: Bearer <bearer_token>" --data '{"role": "creator"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name> 7.23.8. deleteOrganizationTeam Delete the specified team. DELETE /api/v1/organization/{orgname}/team/{teamname} Authorizations: oauth2_implicit ( org:admin ) Path parameters Type Name Description Schema path teamname required The name of the team string path orgname required The name of the organization string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE \ -H "Authorization: Bearer <your_access_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>" 7.24. trigger Create, list and manage build triggers. 7.24.1. activateBuildTrigger Activate the specified build trigger. POST /api/v1/repository/{repository}/trigger/{trigger_uuid}/activate Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Name Description Schema config required Arbitrary json. 
object pull_robot optional The name of the robot that will be used to pull images. string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid/activate" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "config": { "branch": "main" }, "pull_robot": "example+robot" }' 7.24.2. listTriggerRecentBuilds List the builds started by the specified trigger. GET /api/v1/repository/{repository}/trigger/{trigger_uuid}/builds Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Query parameters Type Name Description Schema query limit optional The maximum number of builds to return integer Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid/builds?limit=10" \ -H "Authorization: Bearer <your_access_token>" 7.24.3. manuallyStartBuildTrigger Manually start a build from the specified trigger. POST /api/v1/repository/{repository}/trigger/{trigger_uuid}/start Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Optional run parameters for activating the build trigger Name Description Schema branch_name optional (SCM only) If specified, the name of the branch to build. string commit_sha optional (Custom Only) If specified, the ref/SHA1 used to checkout a git repository. string refs optional (SCM Only) If specified, the ref to build. Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid/start" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "branch_name": "main", "commit_sha": "abcdef1234567890", "refs": "refs/heads/main" }' 7.24.4. getBuildTrigger Get information for the specified build trigger. GET /api/v1/repository/{repository}/trigger/{trigger_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid" \ -H "Authorization: Bearer <your_access_token>" 7.24.5. updateBuildTrigger Updates the specified build trigger. 
PUT /api/v1/repository/{repository}/trigger/{trigger_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Request body schema (application/json) Options for updating a build trigger Name Description Schema enabled required Whether the build trigger is enabled boolean Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X PUT "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{"enabled": true}' 7.24.6. deleteBuildTrigger Delete the specified build trigger. DELETE /api/v1/repository/{repository}/trigger/{trigger_uuid} Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path trigger_uuid required The UUID of the build trigger string path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid" \ -H "Authorization: Bearer <your_access_token>" 7.24.7. listBuildTriggers List the triggers for the specified repository. GET /api/v1/repository/{repository}/trigger/ Authorizations: oauth2_implicit ( repo:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/" \ -H "Authorization: Bearer <your_access_token>" 7.25. user Manage the current user. 7.25.1. createStar Star a repository. POST /api/v1/user/starred Authorizations: oauth2_implicit ( repo:read ) Request body schema (application/json) Name Description Schema namespace required Namespace in which the repository belongs string repository required Repository name string Responses HTTP Code Description Schema 201 Successful creation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X POST "https://quay-server.example.com/api/v1/user/starred" \ -H "Authorization: Bearer <your_access_token>" \ -H "Content-Type: application/json" \ -d '{ "namespace": "<namespace>", "repository": "<repository_name>" }' 7.25.2. listStarredRepos List all starred repositories. 
GET /api/v1/user/starred Authorizations: oauth2_implicit ( user:admin ) Query parameters Type Name Description Schema query next_page optional The page token for the page string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/user/starred?next_page=<next_page_token>" \ -H "Authorization: Bearer <your_access_token>" 7.25.3. getLoggedInUser Get user information for the authenticated user. GET /api/v1/user/ Authorizations: oauth2_implicit ( user:read ) Responses HTTP Code Description Schema 200 Successful invocation UserView 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/user/" \ -H "Authorization: Bearer <your_access_token>" 7.25.4. deleteStar Removes a star from a repository. DELETE /api/v1/user/starred/{repository} Authorizations: oauth2_implicit ( user:admin ) Path parameters Type Name Description Schema path repository required The full path of the repository. e.g. namespace/name string Responses HTTP Code Description Schema 204 Deleted 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X DELETE "https://quay-server.example.com/api/v1/user/starred/namespace/repository-name" \ -H "Authorization: Bearer <your_access_token>" 7.25.5. getUserInformation Get user information for the specified user. GET /api/v1/users/{username} Authorizations: Path parameters Type Name Description Schema path username required string Responses HTTP Code Description Schema 200 Successful invocation 400 Bad Request ApiError 401 Session required ApiError 403 Unauthorized access ApiError 404 Not found ApiError Example command USD curl -X GET "https://quay-server.example.com/api/v1/users/example_user" \ -H "Authorization: Bearer <your_access_token>" 7.26. Definitions 7.26.1. ApiError Name Description Schema status optional Status code of the response. integer type optional Reference to the type of the error. string detail optional Details about the specific instance of the error. string title optional Unique error code to identify the type of error. string error_message optional Deprecated; alias for detail string error_type optional Deprecated; alias for detail string 7.26.2. UserView Name Description Schema verified optional Whether the user's email address has been verified boolean anonymous optional true if this user data represents a guest user boolean email optional The user's email address string avatar optional Avatar data representing the user's icon object organizations optional Information about the organizations in which the user is a member array of object logins optional The list of external login providers against which the user has authenticated array of object can_create_repo optional Whether the user has permission to create repositories boolean preferred_namespace optional If true, the user's namespace is the preferred namespace to display boolean 7.26.3. ViewMirrorConfig Name Description Schema is_enabled optional Used to enable or disable synchronizations. boolean external_reference optional Location of the external repository. string external_registry_username optional Username used to authenticate with external registry. 
external_registry_password optional Password used to authenticate with external registry. sync_start_date optional Determines the time this repository is ready for synchronization. string sync_interval optional Number of seconds after next_start_date to begin synchronizing. integer robot_username optional Username of robot which will be used for image pushes. string root_rule optional A list of glob-patterns used to determine which tags should be synchronized. object external_registry_config optional object 7.26.4. ApiErrorDescription Name Description Schema type optional A reference to the error type resource string title optional The title of the error. Can be used to uniquely identify the kind of error. string description optional A more detailed description of the error that may include help for fixing the issue. string | [
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"title\": \"MyAppToken\" }' \"http://quay-server.example.com/api/v1/user/apptoken\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"http://quay-server.example.com/api/v1/user/apptoken/<token_uuid>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/discovery?query=true\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/error/<error_type>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"message\": { \"content\": \"Hi\", \"media_type\": \"text/plain\", \"severity\": \"info\" } }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/messages\" -H \"Authorization: Bearer <access_token>\"",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/message/<uuid>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/user/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/organization/{orgname}/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/<repository_name>/<namespace>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18\"\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"2024-01-01\", \"endtime\": \"2024-06-18\", \"callback_url\": \"http://your-callback-url.example.com\" }' \"http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/repository/{repository}/logs\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-cancel\" \\",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror/sync-now\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\"",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <false>, 1 \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'",
"curl -X POST \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repo>/mirror\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"is_enabled\": <is_enabled>, \"external_reference\": \"<external_reference>\", \"external_registry_username\": \"<external_registry_username>\", \"external_registry_password\": \"<external_registry_password>\", \"sync_start_date\": \"<sync_start_date>\", \"sync_interval\": <sync_interval>, \"robot_username\": \"<robot_username>\", \"root_rule\": { \"rule\": \"<rule>\", \"rule_type\": \"<rule_type>\" } }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"type\": \"<type>\", \"threshold_percent\": <threshold_percent> }'",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit/<limit_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 21474836480, \"type\": \"Reject\", 1 \"threshold_percent\": 90 2 }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>/limit\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota/{quota_id}/limit/{limit_id}\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota/{quota_id}/limit\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\"S",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <limit_in_bytes> }'",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240, \"limits\": \"10 Gi\" }'",
"curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://<quay-server.example.com>/api/v1/organization/<organization_name>/quota",
"curl -X GET \"https://<quay-server.example.com>/api/v1/user/quota/{quota_id}\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/{orgname}/validateproxycache\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"upstream_registry\": \"<upstream_registry>\" }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/{orgname}/collaborators\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/applications/<client_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X PUT \"https://quay-server.example.com/api/v1/organization/test/applications/12345\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"Updated Application Name\", \"redirect_uri\": \"https://example.com/oauth/callback\", \"application_uri\": \"https://example.com\", \"description\": \"Updated description for the application\", \"avatar_email\": \"[email protected]\" }'",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/{orgname}/applications/{client_id}\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/applications\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<app_name>\", \"redirect_uri\": \"<redirect_uri>\", \"application_uri\": \"<application_uri>\", \"description\": \"<app_description>\", \"avatar_email\": \"<avatar_email>\" }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/applications\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/{orgname}/proxycache\" -H \"Authorization: Bearer <access_token>\"",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/{orgname}/proxycache\" -H \"Authorization: Bearer <access_token>\"",
"curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/proxycache\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"upstream_registry\": \"<upstream_registry>\" }'",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/members/<membername>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X DELETE \"https://<quay-server.example.com>/api/v1/organization/<orgname>/members/<membername>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/organization/<orgname>/members\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X GET \"https://<quay-server.example.com>/api/v1/app/<client_id>\" -H \"Authorization: Bearer <access_token>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>/transitive\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<repository_path>/permissions/user/<username>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>\"",
"curl -X PUT -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"<role>\"}' \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>\"",
"curl -X DELETE -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/<teamname>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/permissions/team/\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\", \"value\": 10}' http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/",
"curl -X GET \"https://quay-server.example.com/api/v1/organization/example_org/autoprunepolicy/\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET -H \"Authorization: Bearer <access_token>\" http://<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<policy_uuid>",
"curl -X DELETE \"https://quay-server.example.com/api/v1/organization/example_org/autoprunepolicy/example_policy_uuid\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"creation_date\", \"value\": \"4d\", \"tagPattern\": \"^v*\", \"tagPatternMatches\": true }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/autoprunepolicy/<uuid>\"",
"curl -X POST -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{\"method\": \"number_of_tags\",\"value\": 2}' http://<quay-server.example.com>/api/v1/repository/<organization_name>/<repository_name>/autoprunepolicy/",
"curl -X GET \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/autoprunepolicy/\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/autoprunepolicy/123e4567-e89b-12d3-a456-426614174000\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X DELETE \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/autoprunepolicy/123e4567-e89b-12d3-a456-426614174000\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"number_of_tags\", \"value\": \"5\", \"tagPattern\": \"^test.*\", \"tagPatternMatches\": true }' \"https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/autoprunepolicy/<uuid>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/user/autoprunepolicy/\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"number_of_tags\", \"value\": 10, \"tagPattern\": \"v*\", \"tagPatternMatches\": true }'",
"curl -X GET \"https://quay-server.example.com/api/v1/user/autoprunepolicy/\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/user/autoprunepolicy/{policy_uuid}\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X DELETE \"https://quay-server.example.com/api/v1/user/autoprunepolicy/<policy_uuid>\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X PUT \"https://quay-server.example.com/api/v1/user/autoprunepolicy/<policy_uuid>\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"method\": \"number_of_tags\", \"value\": \"10\", \"tagPattern\": \".*-old\", \"tagPatternMatches\": true }'",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<public>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"curl -X GET -H \"Authorization: Bearer <ACCESS_TOKEN>\" \"https://quay-server.example.com/api/v1/repository?public=true&starred=false&namespace=<NAMESPACE>\"",
"curl -X POST -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"visibility\": \"private\" }' \"https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPO_NAME>/changevisibility\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"description\": \"This is an updated description for the repository.\" }' \"https://quay-server.example.com/api/v1/repository/<NAMESPACE>/<REPOSITORY>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification",
"curl -X GET \"https://quay-server.example.com/api/v1/user/robots?limit=10&token=false&permissions=true\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/organization/<ORGNAME>/robots/<ROBOT_SHORTNAME>/permissions\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/user/robots/<ROBOT_SHORTNAME>/permissions\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://quay-server.example.com/api/v1/organization/<ORGNAME>/robots/<ROBOT_SHORTNAME>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"",
"curl -X GET \"https://quay-server.example.com/oauth2/federation/robot/token\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/organization/{orgname}/robots/{robot_shortname}/federation\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\"",
"curl -X GET \"https://quay-server.example.com/api/v1/find/repositories?query=<repo_name>&page=1&includeUsage=true\" -H \"Authorization: Bearer <bearer_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/find/all?query=<mysearchterm>\" -H \"Authorization: Bearer <bearer_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/entities/<prefix>?includeOrgs=<true_or_false>&includeTeams=<true_or_false>&namespace=<namespace>\" -H \"Authorization: Bearer <bearer_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"username\": \"newuser\", \"email\": \"[email protected]\" }' \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/{username}\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay_server>/api/v1/superuser/logs?starttime=<start_time>&endtime=<end_time>&page=<page_number>&next_page=<next_page_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/organizations/\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"service\": \"<service_name>\", \"expiration\": <unix_timestamp> }' \"<quay_server>/api/v1/superuser/keys\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay_server>/api/v1/superuser/keys\"",
"curl -X PUT \"https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota/<quota_id>\" -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <NEW_QUOTA_LIMIT> }'",
"curl -X DELETE \"https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota/<quota_id>\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota\" -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240 }'",
"curl -X GET \"https://quay-server.example.com/api/v1/superuser/organization/<namespace>/quota\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X PUT \"https://quay-server.example.com/api/v1/superuser/users/<username>/quota/<quota_id>\" -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <NEW_QUOTA_LIMIT> }'",
"curl -X DELETE \"https://quay-server.example.com/api/v1/superuser/users/<username>/quota/<quota_id>\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/superuser/users/<username>/quota\" -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <QUOTA_LIMIT> }'",
"curl -X GET \"https://quay-server.example.com/api/v1/superuser/users/<username>/quota\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"email\": \"<contact_email>\", \"invoice_email\": <boolean_value>, \"invoice_email_address\": \"<invoice_email_address>\", \"tag_expiration_s\": <expiration_seconds> }' \"https://<quay_server>/api/v1/superuser/organizations/<organization_name>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay_server>/api/v1/superuser/organizations/<organization_name>\"",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"notes\": \"<approval_notes>\" }' \"https://<quay_server>/api/v1/superuser/approvedkeys/<kid>\"",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay_server>/api/v1/superuser/keys/<kid>\"",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<service_key_name>\", \"metadata\": {\"<key>\": \"<value>\"}, \"expiration\": <unix_timestamp> }' \"https://<quay_server>/api/v1/superuser/keys/<kid>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay_server>/api/v1/superuser/keys/<kid>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/superuser/<build_uuid>/status\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/superuser/<build_uuid>/build\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/superuser/<build_uuid>/logs\" -H \"Authorization: Bearer <ACCESS_TOKEN>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay_server>/api/v1/superuser/registrysize/\"",
"curl -X POST \"https://quay-server.example.com/api/v1/superuser/registrysize/\" -H \"Authorization: Bearer <ACCESS_TOKEN>\" -H \"Content-Type: application/json\" -d '{ \"namespace\": \"<namespace>\", \"last_ran\": 1700000000, \"queued\": true, \"running\": false }'",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore",
"curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>",
"curl -X DELETE \"https://quay-server.example.com/api/v1/repository/<namespace>/<repo_name>/tag/<tag_name>\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"If the user is merely invited to join the team, then the invite is removed instead.",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"",
"curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"",
"curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"",
"curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H \"Authorization: Bearer <bearer_token>\" --data '{\"role\": \"creator\"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>",
"curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid/activate\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"config\": { \"branch\": \"main\" }, \"pull_robot\": \"example+robot\" }'",
"curl -X GET \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid/builds?limit=10\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid/start\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"branch_name\": \"main\", \"commit_sha\": \"abcdef1234567890\", \"refs\": \"refs/heads/main\" }'",
"curl -X GET \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X PUT \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{\"enabled\": true}'",
"curl -X DELETE \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/example-trigger-uuid\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/repository/example_namespace/example_repo/trigger/\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X POST \"https://quay-server.example.com/api/v1/user/starred\" -H \"Authorization: Bearer <your_access_token>\" -H \"Content-Type: application/json\" -d '{ \"namespace\": \"<namespace>\", \"repository\": \"<repository_name>\" }'",
"curl -X GET \"https://quay-server.example.com/api/v1/user/starred?next_page=<next_page_token>\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/user/\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X DELETE \"https://quay-server.example.com/api/v1/user/starred/namespace/repository-name\" -H \"Authorization: Bearer <your_access_token>\"",
"curl -X GET \"https://quay-server.example.com/api/v1/users/example_user\" -H \"Authorization: Bearer <your_access_token>\""
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_api_guide/red_hat_quay_application_programming_interface_api |
Release notes for Red Hat build of OpenJDK 11.0.25 | Release notes for Red Hat build of OpenJDK 11.0.25 Red Hat build of OpenJDK 11 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.25/index |
Chapter 10. Installing a cluster on Azure in a restricted network with user-provisioned infrastructure | Chapter 10. Installing a cluster on Azure in a restricted network with user-provisioned infrastructure In OpenShift Container Platform, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you must use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you have manually created long-term credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 10.1. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 10.1.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 10.1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.2. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 10.2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. 
While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage 10.2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. 
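If you prefer to script this step with the Azure CLI, a minimal sketch might look like the following; the resource group and zone names are placeholders, and you should confirm the options against your installed CLI version:
az network dns zone create --resource-group <base_domain_resource_group> --name clusters.openshiftcorp.com
az network dns zone show --resource-group <base_domain_resource_group> --name clusters.openshiftcorp.com --query nameServers
The nameServers values returned by the second command are the authoritative name servers that you record with your registrar.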
Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 10.2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click Next: Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click Next: Review + create and then click Create . 10.2.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.2.5. Required Azure roles An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. Before you create the identity, verify that your environment meets the following requirements: The Azure account that you use to create the identity is assigned the User Access Administrator and Contributor roles. These roles are required when: Creating a service principal or user-assigned managed identity. Enabling a system-assigned managed identity on a virtual machine. If you are going to use a service principal to complete the installation, verify that the Azure account that you use to create the identity is assigned the microsoft.directory/servicePrincipals/createAsOwner permission in Microsoft Entra ID. To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 10.2.6. Required Azure permissions for user-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions.
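As a rough sketch, assigning both roles at the subscription scope with the Azure CLI might look like the following; the identity object ID and subscription ID are placeholders:
az role assignment create --role "Contributor" --assignee-object-id <identity_object_id> --scope /subscriptions/<subscription_id>
az role assignment create --role "User Access Administrator" --assignee-object-id <identity_object_id> --scope /subscriptions/<subscription_id>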
For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 10.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 10.2. Required permissions for creating compute resources Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/deallocate/action Example 10.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 10.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Example 10.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 10.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 10.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 10.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 10.9. Required permissions for creating deployments Microsoft.Resources/deployments/read Microsoft.Resources/deployments/write Microsoft.Resources/deployments/validate/action Microsoft.Resources/deployments/operationstatuses/read Example 10.10. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/write Example 10.11. 
Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 10.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. Example 10.13. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 10.14. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/images/delete Example 10.15. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 10.16. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Example 10.17. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 10.18. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 10.19. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions related to resource group creation to your subscription. After the resource group is created, you can scope the rest of the permissions to the created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 10.2.7. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. 
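One hedged way to confirm the role assignments on your account is the Azure CLI, for example (the sign-in name is a placeholder):
az role assignment list --assignee <username@example.com> --output table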
If you want to use a custom role, you have created a custom role with the required permissions listed in the Required Azure permissions for user-provisioned infrastructure section. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. If you applied the Contributor role to your service principal, assign the User Administrator Access role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 10.2.8. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. 
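If you want to preview the regions that are visible to your subscription before you run the installation program, one option is the Azure CLI, for example:
az account list-locations --output table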
Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 10.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 10.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 10.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 10.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.2. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 10.3.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 10.20. Machine types based on 64-bit x86 architecture standardBSFamily standardDADSv5Family standardDASv4Family standardDASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHCSFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 10.3.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 10.21. 
Machine types based on 64-bit ARM architecture standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 10.4. Using the Azure Marketplace offering Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat. To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. If required, your VMs are automatically upgraded as part of the installation process. 
Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. If you use the Azure Resource Manager (ARM) template to deploy your worker nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "413.92.2023101700" } ... } ... } 10.4.1. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. 
However, you must have an active subscription to access this page. 10.4.2. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 10.5. 
Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 10.5.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 10.5.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a installation. Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VNet to install the cluster under the platform.azure field: networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4 1 Replace <vnet_resource_group> with the resource group name that contains the existing virtual network (VNet). 2 Replace <vnet> with the existing virtual network name. 3 Replace <control_plane_subnet> with the existing subnet name to deploy the control plane machines. 4 Replace <compute_subnet> with the existing subnet name to deploy compute machines. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Important Azure Firewall does not work seamlessly with Azure Public Load balancers. Thus, when using Azure Firewall for restricting internet access, the publish field in install-config.yaml should be set to Internal . Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. 10.5.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. 
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.5.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. 
Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 10.5.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. 
Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exists as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. 
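Rather than copying the values by hand, you can extract both attributes directly from the manifest. This is a minimal sketch that assumes each attribute appears on its own line in the YAML file, which is the usual layout:
USD export INFRA_ID=`grep -o 'infrastructureName: .*' <installation_directory>/manifests/cluster-infrastructure-02-config.yml | awk '{print USD2}'`
USD export RESOURCE_GROUP=`grep -o 'resourceGroupName: .*' <installation_directory>/manifests/cluster-infrastructure-02-config.yml | awk '{print USD2}'`
USD echo "USD{INFRA_ID} USD{RESOURCE_GROUP}"
Verify that the printed values match the attributes described above before you continue.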
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 10.6. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer. You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" Note If you want to assign a custom role with all the required permissions to the identity, run the following command: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role <custom_role> \ 1 --scope "USD{RESOURCE_GROUP_ID}" 1 Specifies the custom role name. 10.7. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. 
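Optionally, you can confirm that the storage account was provisioned successfully before you continue. This check is illustrative and not a required step:
USD az storage account show -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}sa --query provisioningState -o tsv
The command should print Succeeded.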
Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>."rhel-coreos-extensions"."azure-disk".url'` where: <architecture> Specifies the architecture, valid values include x86_64 or aarch64 . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the local VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 10.8. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 10.9. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. 
Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 10.9.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 10.22. 01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 10.10. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. 
Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters storageAccount="USD{CLUSTER_NAME}sa" \ 3 --parameters architecture="<architecture>" 4 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of your Azure storage account. 4 Specify the system architecture. Valid values are x64 (default) or Arm64 . 10.10.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 10.23. 02_storage.json ARM template { "USDschema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "architecture": { "type": "string", "metadata": { "description": "The architecture of the Virtual Machines" }, "defaultValue": "x64", "allowedValues": [ "Arm64", "x64" ] }, "baseName": { "type": "string", "minLength": 1, "metadata": { "description": "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "storageAccount": { "type": "string", "metadata": { "description": "The Storage Account name" } }, "vhdBlobURL": { "type": "string", "metadata": { "description": "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables": { "location": "[resourceGroup().location]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName": "[parameters('baseName')]", "imageNameGen2": "[concat(parameters('baseName'), '-gen2')]", "imageRelease": "1.0.0" }, "resources": [ { "apiVersion": "2021-10-01", "type": "Microsoft.Compute/galleries", "name": "[variables('galleryName')]", "location": "[variables('location')]", "resources": [ { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageName')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V1", "identifier": { "offer": "rhcos", "publisher": "RedHat", "sku": "basic" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageName')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] }, { "apiVersion": "2021-10-01", "type": "images", 
"name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V2", "identifier": { "offer": "rhcos-gen2", "publisher": "RedHat-gen2", "sku": "gen2" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageNameGen2')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] } ] } ] } 10.11. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 10.11.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 10.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 10.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 10.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 10.12. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. 
Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 10.12.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 10.24. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : 
"Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip-v4", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : "[variables('masterLoadBalancerName')]" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", 
"properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 10.13. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 10.13.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 10.25. 
04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "clusterNsgName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 10.14. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
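If you want to distribute the control plane machines across availability zones, as described in the preceding note, one possible approach is to add a zones property to the Microsoft.Compute/virtualMachines resource in your copy of 05_masters.json, next to its location property. The following snippet is illustrative only and assumes that your region provides zones 1 through 3; it spreads the default three control plane machines across those zones:
"zones" : [ "[string(add(mod(copyIndex(), 3), 1))]" ],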
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 10.14.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 10.26. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" 
}, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 10.15. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 10.16. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 10.16.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 10.27. 
06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', 
concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 10.17. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.18. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 10.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.20. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 10.21. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 10.22. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service | [
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"413.92.2023101700\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/installing-restricted-networks-azure-user-provisioned |
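The certificate signing requests shown above arrive in two rounds (client CSRs first, then serving CSRs once the nodes join), so the approval command usually has to be repeated. The following is a minimal helper sketch, not taken from the installation procedure itself: it assumes two compute nodes and a 15-second polling interval (both arbitrary choices) and reuses only the oc commands already shown above to approve pending CSRs until the workers report Ready.

#!/bin/bash
# Approve pending CSRs until the expected number of worker nodes is Ready.
EXPECTED_WORKERS=2   # assumption: set to the number of compute nodes you deployed
until [ "$(oc get nodes -l node-role.kubernetes.io/worker --no-headers 2>/dev/null | grep -c ' Ready ')" -ge "${EXPECTED_WORKERS}" ]; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 15
done
oc get nodes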
Chapter 13. Managing ISO Images | Chapter 13. Managing ISO Images You can use Satellite to store ISO images, either from Red Hat's Content Delivery Network or other sources. You can also upload other files, such as virtual machine images, and publish them in repositories. 13.1. Importing ISO Images from Red Hat The Red Hat Content Delivery Network provides ISO images for certain products. The procedure for importing this content is similar to the procedure for enabling repositories for RPM content. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . In the Search field, enter an image name, for example, Red Hat Enterprise Linux 7 Server (ISOs) . In the Available Repositories window, expand Red Hat Enterprise Linux 7 Server (ISOs) . For the x86_64 7.2 entry, click the Enable icon to enable the repositories for the image. In the Satellite web UI, navigate to Content > Products and click Red Hat Enterprise Linux Server . Click the Repositories tab of the Red Hat Enterprise Linux Server window, and click Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2 . In the upper right of the Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2 window, click Select Action and select Sync Now . To view the Synchronization Status In the Satellite web UI, navigate to Content > Sync Status and expand Red Hat Enterprise Linux Server . CLI procedure Locate the Red Hat Enterprise Linux Server product for file repositories: Enable the file repository for Red Hat Enterprise Linux 7.2 Server ISO: Locate the repository in the product: Synchronize the repository in the product: 13.2. Importing Individual ISO Images and Files Use this procedure to manually import ISO content and other files to Satellite Server. To import files, you can complete the following steps in the Satellite web UI or using the Hammer CLI. However, if the size of the file that you want to upload is larger than 15 MB, you must use the Hammer CLI to upload it to a repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Products , and in the Products window, click Create Product . In the Name field, enter a name to identify the product. This name populates the Label field. Optional: In the GPG Key field, enter a GPG Key for the product. Optional: From the Sync Plan list, select a synchronization plan for the product. Optional: In the Description field, enter a description of the product. Click Save . In the Products window, click the new product and then click Create Repository . In the Name field, enter a name for the repository. This automatically populates the Label field. From the Type list, select file . In the Upstream URL field, enter the URL of the registry to use as a source. Add a corresponding user name and password in the Upstream Username and Upstream Password fields. Click Save . Select the new repository. Navigate to Upload File and click Browse . Select the .iso file and click Upload . CLI procedure Create the custom product: Create the repository: Upload the ISO file to the repository: | [
"hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \" | grep \"file\"",
"hammer repository-set enable --product \"Red Hat Enterprise Linux Server\" --name \"Red Hat Enterprise Linux 7 Server (ISOs)\" --releasever 7.2 --basearch x86_64 --organization \" My_Organization \"",
"hammer repository list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer repository synchronize --name \"Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"",
"hammer product create --name \" My_ISOs \" --sync-plan \"Example Plan\" --description \" My_Product \" --organization \" My_Organization \"",
"hammer repository create --name \" My_ISOs \" --content-type \"file\" --product \" My_Product \" --organization \" My_Organization \"",
"hammer repository upload-content --path ~/bootdisk.iso --id repo_ID --organization \" My_Organization \""
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/managing_iso_images_content-management |
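After the upload, you can confirm that the file landed in the repository. This is a hedged example rather than part of the documented procedure: it assumes the placeholder repository, product, and organization names used above, and the exact fields reported can vary between Satellite versions.

hammer repository info --name "My_ISOs" --product "My_Product" --organization "My_Organization"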
Chapter 1. Introduction to hardening Ansible Automation Platform | Chapter 1. Introduction to hardening Ansible Automation Platform This document provides guidance for improving the security posture (referred to as "hardening" throughout this guide) of your Red Hat Ansible Automation Platform deployment on Red Hat Enterprise Linux. Other deployment targets, such as OpenShift, are not currently within the scope of this guide. Ansible Automation Platform managed services available through cloud service provider marketplaces are also not within the scope of this guide. This guide takes a practical approach to hardening the Ansible Automation Platform security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day two operations. As this guide specifically covers Ansible Automation Platform running on Red Hat Enterprise Linux, hardening guidance for Red Hat Enterprise Linux will be covered where it affects the automation platform components. Additional considerations with regards to the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) are provided for those organizations that integrate the DISA STIG as a part of their overall security strategy. Note These recommendations do not guarantee security or compliance of your deployment of Ansible Automation Platform. You must assess security from the unique requirements of your organization to address specific threats and risks and balance these against implementation factors. 1.1. Audience This guide is written for personnel responsible for installing, configuring, and maintaining Ansible Automation Platform 2.4 when deployed on Red Hat Enterprise Linux. Additional information is provided for security operations, compliance assessment, and other functions associated with related security processes. 1.2. Overview of Ansible Automation Platform Ansible is an open source, command-line IT automation software application written in Python. You can use Ansible Automation Platform to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible's main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training. Ansible Automation Platform enhances the Ansible language with enterprise-class features, such as Role-Based Access Controls (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows. With Ansible Automation Platform you get certified content from our robust partner ecosystem; added security, reporting, and analytics; and life cycle technical support to scale automation across your organization. Ansible Automation Platform simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. 1.2.1. 
Ansible Automation Platform components Ansible Automation Platform is a modular platform that includes automation controller, automation hub, Event-Driven Ansible controller, and Insights for Ansible Automation Platform. Additional resources For more information about the components provided within Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_hardening_guide/assembly-intro-to-aap-hardening |
B.6. Guest virtual machine booting stalls with error: No boot device | B.6. Guest virtual machine booting stalls with error: No boot device Symptom After building a guest virtual machine from an existing disk image, the guest booting stalls with the error message No boot device . However, the guest virtual machine can start successfully using the QEMU command directly. Investigation The disk's bus type is not specified in the command for importing the existing disk image: However, the command line used to boot up the guest virtual machine using QEMU directly shows that it uses virtio for its bus type: Note the bus= in the guest's XML generated by libvirt for the imported guest: <domain type='qemu'> <name>rhel_64</name> <uuid>6cd34d52-59e3-5a42-29e4-1d173759f3e7</uuid> <memory>2097152</memory> <currentMemory>2097152</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='rhel5.4.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'> <timer name='pit' tickpolicy='delay'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='none'/> <source file='/root/RHEL-Server-5.8-64-virtio.qcow2'/> <emphasis role="bold"><target dev='hda' bus='ide'/></emphasis> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <controller type='ide' index='0'/> <interface type='bridge'> <mac address='54:52:00:08:3e:8c'/> <source bridge='br0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> <video> <model type='cirrus' vram='9216' heads='1'/> </video> </devices> </domain> The bus type for the disk is set as ide , which is the default value set by libvirt . This is the incorrect bus type, and has caused the unsuccessful boot for the imported guest. Solution Procedure B.2. Correcting the disk bus type Undefine the imported guest, then re-import it with bus=virtio and the following: Edit the imported guest's XML using virsh edit and correct the disk bus type. | [
"virt-install --connect qemu:///system --ram 2048 -n rhel_64 --os-type=linux --os-variant=rhel5 --disk path=/root/RHEL-Server-5.8-64-virtio.qcow2,device=disk,format=qcow2 --vcpus=2 --graphics spice --noautoconsole --import",
"ps -ef | grep qemu /usr/libexec/qemu-kvm -monitor stdio -drive file=/root/RHEL-Server-5.8-32-virtio.qcow2,index=0, if=virtio ,media=disk,cache=none,format=qcow2 -net nic,vlan=0,model=rtl8139,macaddr=00:30:91:aa:04:74 -net tap,vlan=0,script=/etc/qemu-ifup,downscript=no -m 2048 -smp 2,cores=1,threads=1,sockets=2 -cpu qemu64,+sse2 -soundhw ac97 -rtc-td-hack -M rhel5.6.0 -usbdevice tablet -vnc :10 -boot c -no-kvm-pit-reinjection",
"<domain type='qemu'> <name>rhel_64</name> <uuid>6cd34d52-59e3-5a42-29e4-1d173759f3e7</uuid> <memory>2097152</memory> <currentMemory>2097152</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='rhel5.4.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='utc'> <timer name='pit' tickpolicy='delay'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2' cache='none'/> <source file='/root/RHEL-Server-5.8-64-virtio.qcow2'/> <emphasis role=\"bold\"><target dev='hda' bus='ide'/></emphasis> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <controller type='ide' index='0'/> <interface type='bridge'> <mac address='54:52:00:08:3e:8c'/> <source bridge='br0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target port='0'/> </console> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> <video> <model type='cirrus' vram='9216' heads='1'/> </video> </devices> </domain>",
"virsh destroy rhel_64 virsh undefine rhel_64 virt-install --connect qemu:///system --ram 1024 -n rhel_64 -r 2048 --os-type=linux --os-variant=rhel5 --disk path=/root/RHEL-Server-5.8-64-virtio.qcow2,device=disk, bus=virtio ,format=qcow2 --vcpus=2 --graphics spice --noautoconsole --import"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/app_domain_not_booting |
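For the virsh edit step, the change amounts to swapping the ide target for a virtio one. A sketch of the corrected disk element is shown below; the dev='vda' device name is the conventional choice for a virtio disk and is an assumption here (the original only requires the bus type to change), and the old <address type='drive' .../> line should be dropped so that libvirt can generate a suitable PCI address for the device.

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/root/RHEL-Server-5.8-64-virtio.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>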
Chapter 4. Desktop | Chapter 4. Desktop Multiple mount changes no longer cause performance drop for clients of the GUnixMountMonitor object Previously, when the autofs program initiated multiple mount changes in a short period of time, services using the GUnixMountMonitor object caused a high CPU load. This update makes it possible to skip accumulated file change events of the /proc/mounts file that cannot be handled in real-time. As a result, the CPU load for the clients of GUnixMountMonitor is lower. (BZ#1154183) xfreerdp client now works correctly on systems with enabled FIPS mode Previously, when the xfreerdp client was used on systems with enabled FIPS mode, it exited unexpectedly due to usage of FIPS non-compliant encryption algorithms. This update ensures that xfreerdp does not exit unexpectedly when it is used with FIPS mode enabled and that FIPS security encryption method is negotiated. As a result, xfreerdp now works correctly with the RDP and TLS security protocols on systems with enabled FIPS mode. However, an error now occurs if the Network Level Authentication (NLA) protocol is required, because its implementation requires FIPS non-compliant algorithms. (BZ# 1347920 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/bug_fixes_desktop |
Chapter 123. KafkaMirrorMakerStatus schema reference | Chapter 123. KafkaMirrorMakerStatus schema reference Used in: KafkaMirrorMaker Property Property type Description conditions Condition array List of status conditions. observedGeneration integer The generation of the CRD that was last reconciled by the operator. labelSelector string Label selector for pods providing this resource. replicas integer The current number of pods being used to provide this resource. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkamirrormakerstatus-reference |
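To illustrate how these properties appear on a live resource, here is a hedged sketch of a status section. The property names come from the table above; the condition fields and all of the values shown (timestamps, selector string, replica count) are illustrative assumptions only.

status:
  conditions:
    - type: Ready
      status: "True"
      lastTransitionTime: "2024-05-01T12:00:00.000Z"
  observedGeneration: 2
  labelSelector: strimzi.io/cluster=my-mirror-maker,strimzi.io/kind=KafkaMirrorMaker
  replicas: 1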
Chapter 13. Understanding low latency tuning for cluster nodes | Chapter 13. Understanding low latency tuning for cluster nodes Edge computing has a key role in reducing latency and congestion problems and improving application performance for telco and 5G network applications. Maintaining a network architecture with the lowest possible latency is key for meeting the network performance requirements of 5G. Compared to 4G technology, with an average latency of 50 ms, 5G is targeted to reach latency of 1 ms or less. This reduction in latency boosts wireless throughput by a factor of 10. 13.1. About low latency Many of the deployed applications in the Telco space require low latency that can only tolerate zero packet loss. Tuning for zero packet loss helps mitigate the inherent issues that degrade network performance. For more information, see Tuning for Zero Packet Loss in Red Hat OpenStack Platform (RHOSP) . The Edge computing initiative also comes in to play for reducing latency rates. Think of it as being on the edge of the cloud and closer to the user. This greatly reduces the distance between the user and distant data centers, resulting in reduced application response times and performance latency. Administrators must be able to manage their many Edge sites and local services in a centralized way so that all of the deployments can run at the lowest possible management cost. They also need an easy way to deploy and configure certain nodes of their cluster for real-time low latency and high-performance purposes. Low latency nodes are useful for applications such as Cloud-native Network Functions (CNF) and Data Plane Development Kit (DPDK). OpenShift Container Platform currently provides mechanisms to tune software on an OpenShift Container Platform cluster for real-time running and low latency (around <20 microseconds reaction time). This includes tuning the kernel and OpenShift Container Platform set values, installing a kernel, and reconfiguring the machine. But this method requires setting up four different Operators and performing many configurations that, when done manually, is complex and could be prone to mistakes. OpenShift Container Platform uses the Node Tuning Operator to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator uses this performance profile configuration that makes it easier to make these changes in a more reliable way. The administrator can specify whether to update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolate CPUs for application containers to run the workloads. OpenShift Container Platform also supports workload hints for the Node Tuning Operator that can tune the PerformanceProfile to meet the demands of different industry environments. Workload hints are available for highPowerConsumption (very low latency at the cost of increased power consumption) and realTime (priority given to optimum latency). A combination of true/false settings for these hints can be used to deal with application-specific workload profiles and requirements. Workload hints simplify the fine-tuning of performance to industry sector settings. Instead of a "one size fits all" approach, workload hints can cater to usage patterns such as placing priority on: Low latency Real-time capability Efficient use of power Ideally, all of the previously listed items are prioritized. 
Some of these items come at the expense of others however. The Node Tuning Operator is now aware of the workload expectations and better able to meet the demands of the workload. The cluster admin can now specify into which use case that workload falls. The Node Tuning Operator uses the PerformanceProfile to fine tune the performance settings for the workload. The environment in which an application is operating influences its behavior. For a typical data center with no strict latency requirements, only minimal default tuning is needed that enables CPU partitioning for some high performance workload pods. For data centers and workloads where latency is a higher priority, measures are still taken to optimize power consumption. The most complicated cases are clusters close to latency-sensitive equipment such as manufacturing machinery and software-defined radios. This last class of deployment is often referred to as Far edge. For Far edge deployments, ultra-low latency is the ultimate priority, and is achieved at the expense of power management. 13.2. About Hyper-Threading for low latency and real-time applications Hyper-Threading is an Intel processor technology that allows a physical CPU processor core to function as two logical cores, executing two independent threads simultaneously. Hyper-Threading allows for better system throughput for certain workload types where parallel processing is beneficial. The default OpenShift Container Platform configuration expects Hyper-Threading to be enabled. For telecommunications applications, it is important to design your application infrastructure to minimize latency as much as possible. Hyper-Threading can slow performance times and negatively affect throughput for compute-intensive workloads that require low latency. Disabling Hyper-Threading ensures predictable performance and can decrease processing times for these workloads. Note Hyper-Threading implementation and configuration differs depending on the hardware you are running OpenShift Container Platform on. Consult the relevant host hardware tuning information for more details of the Hyper-Threading implementation specific to that hardware. Disabling Hyper-Threading can increase the cost per core of the cluster. Additional resources Configuring Hyper-Threading for a cluster | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/cnf-understanding-low-latency |
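To make the workload-hint discussion concrete, the following is a minimal PerformanceProfile sketch for a latency-sensitive node pool. The profile name, CPU ranges, and node selector label are illustrative assumptions; the hint combination shown (realTime enabled, highPowerConsumption disabled) corresponds to the case where optimum latency is prioritized without giving up power management entirely.

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-low-latency          # assumed name
spec:
  cpu:
    isolated: "2-31"                 # assumed CPUs dedicated to application containers
    reserved: "0-1"                  # assumed CPUs kept for housekeeping and pod infra containers
  realTimeKernel:
    enabled: true                    # switch the node to the kernel-rt kernel
  workloadHints:
    realTime: true
    highPowerConsumption: false
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""   # assumed node role label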
Chapter 8. Basic Red Hat Ceph Storage client setup | Chapter 8. Basic Red Hat Ceph Storage client setup As a storage administrator, you have to set up client machines with basic configuration to interact with the storage cluster. Most client machines only need the ceph-common package and its dependencies installed. It will supply the basic ceph and rados commands, as well as other commands like mount.ceph and rbd . 8.1. Configuring file setup on client machines Client machines generally need a smaller configuration file than a full-fledged storage cluster member. You can generate a minimal configuration file which can give details to clients to reach the Ceph monitors. Prerequisites A running Red Hat Ceph Storage cluster. Root access to the nodes. Procedure On the node where you want to set up the files, create a directory ceph in the /etc folder: Example Navigate to /etc/ceph directory: Example Generate the configuration file in the ceph directory: Example The contents of this file should be installed in /etc/ceph/ceph.conf path. You can use this configuration file to reach the Ceph monitors. 8.2. Setting-up keyring on client machines Most Ceph clusters are run with the authentication enabled, and the client needs the keys in order to communicate with cluster machines. You can generate the keyring which can give details to clients to reach the Ceph monitors. Prerequisites A running Red Hat Ceph Storage cluster. Root access to the nodes. Procedure On the node where you want to set up the keyring, create a directory ceph in the /etc folder: Example Navigate to /etc/ceph directory in the ceph directory: Example Generate the keyring for the client: Syntax Example Verify the output in the ceph.keyring file: Example The resulting output should be put into a keyring file, for example /etc/ceph/ceph.keyring . | [
"mkdir /etc/ceph/",
"cd /etc/ceph/",
"ceph config generate-minimal-conf minimal ceph.conf for 417b1d7a-a0e6-11eb-b940-001a4a000740 [global] fsid = 417b1d7a-a0e6-11eb-b940-001a4a000740 mon_host = [v2:10.74.249.41:3300/0,v1:10.74.249.41:6789/0]",
"mkdir /etc/ceph/",
"cd /etc/ceph/",
"ceph auth get-or-create client. CLIENT_NAME -o /etc/ceph/ NAME_OF_THE_FILE",
"ceph auth get-or-create client.fs -o /etc/ceph/ceph.keyring",
"cat ceph.keyring [client.fs] key = AQAvoH5gkUCsExAATz3xCBLd4n6B6jRv+Z7CVQ=="
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/basic-red-hat-ceph-storage-client-setup |
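With the configuration file and keyring in place, you can check that the client can reach the monitors. This is a hedged example: it reuses the client.fs identity and file paths from the steps above, and it assumes the client key was created with at least read capability on the monitors (the get-or-create example above does not set any capabilities, so you may need to add them before status queries succeed).

ceph -s --conf /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.keyring --name client.fs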
Chapter 4. Managing Load-balancing service instance logs | Chapter 4. Managing Load-balancing service instance logs Load-balancing service (octavia) instances (amphorae) generate administrative logs and tenant flow logs. The amphorae offload these logs to central locations on syslog receivers using a set of containers or to other syslog receivers at endpoints that you can choose. This log offload feature enables administrators to go to one location for logs, and to retain logs when the amphorae are rotated. Even though log offloading is enabled by default, amphorae still continue to write administrative and tenant flow logs to the disk inside the amphorae. You can, however, disable logging locally if you choose. When you use the TCP syslog protocol, you can specify one or more secondary endpoints for administrative and tenant log offloading in the event that the primary endpoint fails. You can control a range of other logging features such as setting the syslog facility value, changing the tenant flow log format, and widening the scope of administrative logging to include logs from sources like the kernel and from cron . Section 4.1, "Configuration parameters for Load-balancing service instance logging" Section 4.2, "Load-balancing service instance tenant flow log format" Section 4.3, "Disabling Load-balancing service instance tenant flow logging" Section 4.4, "Disabling Load-balancing service instance local log storage" 4.1. Configuration parameters for Load-balancing service instance logging To modify the Load-balancing service (octavia) instance (amphora) logging configuration, set values for one or more configuration parameters that control logging and apply the OpenStackControlPlane custom resource (CR) for the Load-balancing service. These configuration parameters for amphora logging enable you to control features such as turning off log offloading, defining custom endpoints to offload logs to, setting the syslog facility value for logs, and so on. The octavia Operator automatically enables log offloading. Global logging parameters To set the configuration parameters for all logs, you must add a specific section to OpenStackControlerPlane CR for each of the octavia services: housekeeping, health manager, and worker. Add the configuration parameters for all logs underneath the customServiceConfig.[amphora_agent] parameters. Usage example disable_local_log_storage=false When true , instances do not store logs on the instance host filesystem. This includes all kernel, system, and security logs. Default: false . forward_all_logs=true When true , instances forward all log messages to the administrative log endpoints, including non-load balancing related logs such as the cron and kernel logs. Default: true . Administrative logging parameters To set the configuration parameters for administrative logging, you must add a specific section to OpenStackControlerPlane CR for each of the octavia services: housekeeping, health manager, and worker. With the exception of adminLogTargets , you add the configuration parameters for administrative logging underneath the customServiceConfig.[amphora_agent] parameters. Usage example adminLogTargets A list of objects describing syslog endpoints to receive administrative log messages: host : <host> port : <port> protocol : <protocol> An endpoint can be a container, VM, or physical host that is running a process that is listening for the log messages on the specified port. Default: The default value is automatically set by the octavia Operator. 
You add adminLogTargets underneath the octaviaRsyslog parameter. administrative_log_facility=<number> A number between 0 and 7 that is the syslog LOG_LOCAL facility to use for the administrative log messages. Default: 1 . Tenant flow logging parameters To set the configuration parameters for tenant flow logging, you must add a specific section to OpenStackControlerPlane CR for each of the octavia services: housekeeping, health manager, and worker. With the exception of tenantLogTargets , you add the configuration parameters for tenant flow logging underneath the customServiceConfig.[amphora_agent] parameters. For an example of how to set these parameters, see Section 4.3, "Disabling Load-balancing service instance tenant flow logging" . Usage example connection_login=true | false When true , tenant connection flows are logged. Default: true . tenantLogTargets A list of objects describing syslog endpoints to receive tenant traffic flow log messages: host : <host> port : <port> protocol : <protocol> These endpoints can be a container, VM, or physical host that is running a process that is listening for the log messages on the specified port. Default: The default value is automatically set by the octavia Operator. You add tenantLogTargets underneath the octaviaRsyslog parameter. user_log_facility=<number> A number between 0 and 7 that is the syslog "LOG_LOCAL" facility to use for the tenant traffic flow log messages. Default: 0 . user_log_format="<value>" The format for the tenant traffic flow log. Default: "{{ '{{' }} project_id {{ '}}' }} {{ '{{' }} lb_id {{ '}}' }} %f %ci %cp %t %{+Q}r %ST %B %U %[ssl_c_verify] %{+Q}[ssl_c_s_dn] %b %s %Tt %tsc" . The alphanumerics represent specific octavia fields, and the curly braces ({}) are substitution variables. 4.2. Load-balancing service instance tenant flow log format Tenant flow logs for Load-balancing service instances (amphorae) use the HAProxy log format. The two exceptions are the project_id and lb_id variables whose values are provided by the amphora provider driver. Example Here is an example log entry with rsyslog as the syslog receiver: Jun 12 00:44:13 amphora-3e0239c3-5496-4215-b76c-6abbe18de573 haproxy[1644]: 5408b89aa45b48c69a53dca1aaec58db fd8f23df-960b-4b12-ba62-2b1dff661ee7 261ecfc2-9e8e-4bba-9ec2-3c903459a895 172.24.4.1 41152 12/Jun/2019:00:44:13.030 "GET / HTTP/1.1" 200 76 73 - "" e37e0e04-68a3-435b-876c-cffe4f2138a4 6f2720b3-27dc-4496-9039-1aafe2fee105 4 -- Notes A hyphen (-) indicates any field that is unknown or not applicable to the connection. The prefix in the earlier sample log entry originates from the rsyslog receiver, and is not part of the syslog message from the amphora: Jun 12 00:44:13 amphora-3e0239c3-5496-4215-b76c-6abbe18de573 haproxy[1644]:" Default The default amphora tenant flow log format is: `"{{ '{{' }} project_id {{ '}}' }} {{ '{{' }} lb_id {{ '}}' }} %f %ci %cp %t %{+Q}r %ST %B %U %[ssl_c_verify] %{+Q}[ssl_c_s_dn] %b %s %Tt %tsc"` The following table describes the log file format details. Table 4.1. Data variables for tenant flow logs format variable definitions. 
Variable Type Field name {{project_id}} UUID Project ID (substitution variable from the amphora provider driver) {{lb_id}} UUID Load balancer ID (substitution variable from the amphora provider driver) %f string frontend_name %ci IP address client_ip %cp numeric client_port %t date date_time %ST numeric status_code %B numeric bytes_read %U numeric bytes_uploaded %ssl_c_verify Boolean client_certificate_verify (0 or 1) %ssl_c_s_dn string client_certificate_distinguised_name %b string pool_id %s string member_id %Tt numeric processing_time (milliseconds) %tsc string termination_state (with cookie status) Additional resources Custom log format in HAProxy Documentation 4.3. Disabling Load-balancing service instance tenant flow logging Tenant flow log offloading for Load-balancing service instances (amphorae) is enabled by default. To disable tenant flow logging without disabling administrative log offloading, you must override the [amphora_agent].tenant_log_targets` in the customServiceConfig` field of each Load-balancing service component in the OpenstackControlPlane custom resource (CR) file. When the OctaviaConnectionLogging parameter is false , the amphorae do not write tenant flow logs to the disk inside the amphorae, nor offload any logs to syslog receivers listening elsewhere. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , on your workstation. Add the following configuration to the octavia service configuration: Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Confirm that the control plane is deployed by reviewing the pods in the openstack namespace: The control plane is deployed when all the pods are either completed or running. 4.4. Disabling Load-balancing service instance local log storage Even when you configure Load-balancing service instances (amphorae) to offload administrative and tenant flow logs, the amphorae continue to write these logs to the disk inside the amphorae. To improve the performance of the load balancer, you can stop logging locally. Important If you disable logging locally, you also disable all log storage in the amphora, including kernel, system, and security logging. Note If you disable local log storage and the OctaviaLogOffload parameter is set to false, ensure that you set OctaviaConnectionLogging to false for improved load balancing performance. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Open your OpenStackControlPlane` custom resource (CR) file, openstack_control_plane.yaml , on your workstation. Add the following configuration to the octavia service configuration: Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 
Confirm that the control plane is deployed by reviewing the pods in the openstack namespace: The control plane is deployed when all the pods are either completed or running. | [
"octavia: template: octaviaHousekeeping: customServiceConfig: | [amphora_agent] <log configuration parameters go here> octaviaHealthManager: customServiceConfig: | [amphora_agent] <log configuration parameters go here> octaviaWorker: customServiceConfig: | [amphora_agent] <log configuration parameters go here>",
"octavia: template: octaviaRsyslog: adminLogTargets: - host: 192.168.1.1 port: 1514 protocol: udp octaviaHousekeeping: customServiceConfig: | [amphora_agent] <administrative logging parameters go here> octaviaHealthManager: customServiceConfig: | [amphora_agent] <administrative logging parameters go here> octaviaWorker: customServiceConfig: | [amphora_agent] <administrative logging parameters go here>",
"octavia: template: octaviaRsyslog tenantLogTargets: - host: 192.168.1.1 port: 1514 protocol: udp octaviaHousekeeping: customServiceConfig: | [amphora_agent] <tenant flow logging parameters go here> [haproxy_amphora] connection_login=true octaviaHealthManager: customServiceConfig: | [amphora_agent] <tenant flow logging parameters go here> [haproxy_amphora] connection_login=true octaviaWorker: customServiceConfig: | [amphora_agent] <tenant flow logging go here> [haproxy_amphora] connection_login=true",
"Jun 12 00:44:13 amphora-3e0239c3-5496-4215-b76c-6abbe18de573 haproxy[1644]: 5408b89aa45b48c69a53dca1aaec58db fd8f23df-960b-4b12-ba62-2b1dff661ee7 261ecfc2-9e8e-4bba-9ec2-3c903459a895 172.24.4.1 41152 12/Jun/2019:00:44:13.030 \"GET / HTTP/1.1\" 200 76 73 - \"\" e37e0e04-68a3-435b-876c-cffe4f2138a4 6f2720b3-27dc-4496-9039-1aafe2fee105 4 --",
"Jun 12 00:44:13 amphora-3e0239c3-5496-4215-b76c-6abbe18de573 haproxy[1644]:\"",
"`\"{{ '{{' }} project_id {{ '}}' }} {{ '{{' }} lb_id {{ '}}' }} %f %ci %cp %t %{+Q}r %ST %B %U %[ssl_c_verify] %{+Q}[ssl_c_s_dn] %b %s %Tt %tsc\"`",
"octavia: template: octaviaHousekeeping: customServiceConfig: | [amphora_agent] tenant_log_targets = octaviaHealthManager: customServiceConfig: | [amphora_agent] tenant_log_targets = octaviaWorker: customServiceConfig: | [amphora_agent] tenant_log_targets =",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started",
"oc get pods -n openstack",
"octavia: template: octaviaHousekeeping: customServiceConfig: | [amphora_agent] disable_local_log_storage=true octaviaHealthManager: customServiceConfig: | [amphora_agent] disable_local_log_storage=true octaviaWorker: customServiceConfig: | [amphora_agent] disable_local_log_storage=true",
"oc apply -f openstack_control_plane.yaml -n openstack",
"oc get openstackcontrolplane -n openstack NAME STATUS MESSAGE openstack-control-plane Unknown Setup started",
"oc get pods -n openstack"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/manage-lb-service-instance-logs_rhoso-lbaas |
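Pulling the endpoint and facility parameters together, here is a hedged sketch of an octavia section that offloads both administrative and tenant flow logs to an external syslog receiver. The receiver address 192.0.2.10, port 514, and UDP protocol are placeholder assumptions, and only the housekeeping service is shown for brevity; as in the examples above, the same customServiceConfig block would be repeated for octaviaHealthManager and octaviaWorker.

octavia:
  template:
    octaviaRsyslog:
      adminLogTargets:
        - host: 192.0.2.10
          port: 514
          protocol: udp
      tenantLogTargets:
        - host: 192.0.2.10
          port: 514
          protocol: udp
    octaviaHousekeeping:
      customServiceConfig: |
        [amphora_agent]
        administrative_log_facility=1
        user_log_facility=0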
Chapter 9. Installing a private cluster on GCP | Chapter 9. Installing a private cluster on GCP In OpenShift Container Platform version 4.18, you can install a private cluster into an existing VPC on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 9.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.2.1. Private clusters in GCP To create a private cluster on Google Cloud Platform (GCP), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the GCP APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. Because it is not possible to limit access to external load balancers based on source tags, the private cluster uses only internal load balancers to allow access to internal instances. The internal load balancer relies on instance groups rather than the target pools that the network load balancers use. The installation program creates instance groups for each zone, even if there is no instance in that group. 
The cluster IP address is internal only. One forwarding rule manages both the Kubernetes API and machine config server ports. The backend service is comprised of each zone's instance group and, while it exists, the bootstrap instance group. The firewall uses a single rule that is based on only internal source ranges. 9.2.1.1. Limitations No health check for the Machine config server, /healthz , runs because of a difference in load balancer functionality. Two internal load balancers cannot share a single IP address, but two network load balancers can share a single external IP address. Instead, the health of an instance is determined entirely by the /readyz check on port 6443. 9.3. About using a custom VPC In OpenShift Container Platform 4.18, you can deploy a cluster into an existing VPC in Google Cloud Platform (GCP). If you do, you must also use existing subnets within the VPC and routing rules. By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself. 9.3.1. Requirements for using your VPC The installation program will no longer create the following components: VPC Subnets Cloud router Cloud NAT NAT IP addresses If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster. Your VPC and subnets must meet the following characteristics: The VPC must be in the same GCP project that you deploy the OpenShift Container Platform cluster to. To allow access to the internet from the control plane and compute machines, you must configure cloud NAT on the subnets to allow egress to it. These machines do not have a public address. Even if you do not require access to the internet, you must allow egress to the VPC network to obtain the installation program and images. Because multiple cloud NATs cannot be configured on the shared subnets, the installation program cannot configure it. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist and belong to the VPC that you specified. The subnet CIDRs belong to the machine CIDR. You must provide a subnet to deploy the cluster control plane and compute machines to. You can use the same subnet for both machine types. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. 9.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or Ingress rules. 
The GCP credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage, and nodes. 9.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is preserved by firewall rules that reference the machines in your cluster by the cluster's infrastructure ID. Only traffic within the cluster is allowed. If you deploy multiple clusters to the same VPC, the following components might share access between clusters: The API, which is globally available with an external publishing strategy or available throughout the network in an internal publishing strategy Debugging tools, such as ports on VM instances that are open to the machine CIDR for SSH and ICMP access 9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager .
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.7. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 9.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.7.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Note Not all instance types are available in all regions and zones. For a detailed breakdown of which instance types are available in which zones, see regions and zones (Google documentation). Some instance types require the use of Hyperdisk storage. If you use an instance type that requires Hyperdisk storage, all of the nodes in your cluster must support Hyperdisk storage, and you must change the default storage class to use Hyperdisk storage. For more information, see machine series support for Hyperdisk (Google documentation). For instructions on modifying storage classes, see the "GCE PersistentDisk (gcePD) object definition" section in the Dynamic Provisioning page in Storage . Example 9.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 9.7.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 9.2. Machine series for 64-bit ARM machines C4A Tau T2A 9.7.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 9.7.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 9.7.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. 
Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 9.7.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 publish: Internal 27 1 15 17 18 24 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. 
By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
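If you plan to set fips: true , you can check in advance whether the host on which you run the installation program is already operating in FIPS mode; the Important note that follows explains why this matters. The following check is a minimal sketch and assumes a RHEL 8 or RHEL 9 host where the fips-mode-setup tool is installed: fips-mode-setup --check The command reports whether FIPS mode is enabled or disabled on that host.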
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . Additional resources Enabling customer-managed encryption keys for a compute machine set 9.7.8. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml file and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 9.7.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy.
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
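If a copy of oc is already present on your PATH , you can check its client version before downloading a new binary; this is a minimal sketch that assumes an existing oc installation: oc version --client Compare the reported version with the release you are installing, keeping the Important note that follows in mind.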
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.9. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 9.9.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 9.3.
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 9.9.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 9.9.2.1. 
Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the ccoctl utility uses: The IAM Workload Identity Pool Admin role The following granular permissions: compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 9.9.2.2. 
Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. If you plan to install the GCP Filestore Container Storage Interface (CSI) Driver Operator, retain this value. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 9.9.2.3. 
Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 9.4. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 9.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. 
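For example, one way to swap the roles with the gcloud CLI is sketched below; this is not the only method, and <project_id> and <service_account_email> are hypothetical placeholders for your own values: gcloud projects remove-iam-policy-binding <project_id> --member "serviceAccount:<service_account_email>" --role roles/owner gcloud projects add-iam-policy-binding <project_id> --member "serviceAccount:<service_account_email>" --role roles/viewer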
If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 9.13. 
Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\": ...}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 publish: Internal 27",
"./openshift-install create manifests --dir <installation_directory> 1",
"touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1",
"ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml",
"cluster-ingress-default-ingresscontroller.yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret",
"chmod 775 ccoctl.<rhel_version>",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_gcp/installing-gcp-private |
Chapter 5. Configuring Red Hat Cluster With system-config-cluster | Chapter 5. Configuring Red Hat Cluster With system-config-cluster This chapter describes how to configure Red Hat Cluster software using system-config-cluster , and consists of the following sections: Section 5.1, "Configuration Tasks" Section 5.2, "Starting the Cluster Configuration Tool " Section 5.3, "Configuring Cluster Properties" Section 5.4, "Configuring Fence Devices" Section 5.5, "Adding and Deleting Members" Section 5.6, "Configuring a Failover Domain" Section 5.7, "Adding Cluster Resources" Section 5.8, "Adding a Cluster Service to the Cluster" Section 5.9, "Propagating The Configuration File: New Cluster" Section 5.10, "Starting the Cluster Software" Note While system-config-cluster provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, Conga , provides more convenience and flexibility than system-config-cluster . You may want to consider using Conga instead (refer to Chapter 3, Configuring Red Hat Cluster With Conga and Chapter 4, Managing Red Hat Cluster With Conga ). 5.1. Configuration Tasks Configuring Red Hat Cluster software with system-config-cluster consists of the following steps: Starting the Cluster Configuration Tool , system-config-cluster . Refer to Section 5.2, "Starting the Cluster Configuration Tool " . Configuring cluster properties. Refer to Section 5.3, "Configuring Cluster Properties" . Creating fence devices. Refer to Section 5.4, "Configuring Fence Devices" . Creating cluster members. Refer to Section 5.5, "Adding and Deleting Members" . Creating failover domains. Refer to Section 5.6, "Configuring a Failover Domain" . Creating resources. Refer to Section 5.7, "Adding Cluster Resources" . Creating cluster services. Refer to Section 5.8, "Adding a Cluster Service to the Cluster" . Propagating the configuration file to the other nodes in the cluster. Refer to Section 5.9, "Propagating The Configuration File: New Cluster" . Starting the cluster software. Refer to Section 5.10, "Starting the Cluster Software" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/ch-config-scc-CA |
13.4. Disabling User Private Groups | 13.4. Disabling User Private Groups To ensure that IdM does not create a default user private group for a new user, choose one of the following: Section 13.4.1, "Creating a User without a User Private Group" Section 13.4.2, "Disabling User Private Groups Globally for All Users" Even after you disable creating default user private groups, IdM will still require a GID when adding new users. To ensure that adding the user succeeds, see Section 13.4.3, "Adding a User with User Private Groups Disabled" . Note If you want to disable creating default user private groups because of GID conflicts, consider changing the default UID and GID assignment ranges. See Chapter 14, Unique UID and GID Number Assignments . 13.4.1. Creating a User without a User Private Group Add the --noprivate option to the ipa user-add command. Note that for the command to succeed, you must specify a custom private group. See Section 13.4.3, "Adding a User with User Private Groups Disabled" . 13.4.2. Disabling User Private Groups Globally for All Users Log in as the administrator: IdM uses the Directory Server Managed Entries Plug-in to manage user private groups. List the instances of the plug-in: To ensure IdM does not create user private groups, disable the plug-in instance responsible for managing user private groups: Note To re-enable the UPG Definition instance later, use the ipa-managed-entries -e "UPG Definition" enable command. Restart Directory Server to load the new configuration. 13.4.3. Adding a User with User Private Groups Disabled To make sure adding a new user succeeds when creating default user private groups is disabled, choose one of the following: Specify a custom GID when adding a new user. The GID does not have to correspond to an already existing user group. For example, when adding a user from the command line, add the --gid option to the ipa user-add command. Use an automember rule to add the user to an existing group with a GID. See Section 13.6, "Defining Automatic Group Membership for Users and Hosts" . | [
"kinit admin",
"ipa-managed-entries --list",
"ipa-managed-entries -e \"UPG Definition\" disable Disabling Plugin",
"systemctl restart dirsrv.target"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/user-private-groups |
Part IV. Monitoring and Performance | Part IV. Monitoring and Performance | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/part-monitoring_and_performance |
Chapter 8. Booting hosts with the discovery image | Chapter 8. Booting hosts with the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can boot hosts with the discovery image using three methods: USB drive Redfish virtual media iPXE 8.1. Creating an ISO image on a USB drive You can install the Assisted Installer agent using a USB drive that contains the discovery ISO image. Starting the host with the USB drive prepares the host for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Copy the ISO image to the USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded discovery ISO file, for example, discovery.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install the Assisted Installer agent on the cluster host. 8.2. Booting with a USB drive To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure. Procedure Insert the RHCOS discovery ISO USB drive into the target host. Configure the boot drive order in the server firmware settings to boot from the attached discovery ISO, and then reboot the server. Wait for the host to boot up. For UI installations, on the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. For API installations, refresh the token, check the enabled host count, and gather the host IDs: USD source refresh-token USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.enabled_host_count' USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ] 8.3. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> \ -d '{"Image":"<hosted_iso_file>", "Inserted": true}' \ -H "Content-Type: application/json" \ -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. 
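Optional: Before changing the boot settings, you can confirm that the ISO is attached by querying the same virtual media resource. This is a sketch that reuses the iDRAC-style resource path from the preceding step; the exact path varies by BMC vendor: curl -k -u <bmc_username>:<bmc_password> <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD The Image and Inserted properties in the response should reflect the hosted ISO file.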
Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> \ -X PATCH -H 'Content-Type: application/json' \ -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' \ <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> \ -d '{"ResetType": "ForceRestart"}' \ -H 'Content-type: application/json' \ -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> \ -d '{"ResetType": "On"}' -H 'Content-type: application/json' \ -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 8.4. Booting hosts using iPXE The Assisted Installer provides an iPXE script including all the artifacts needed to boot the discovery image for an infrastructure environment. Due to the limitations of the current HTTPS implementation of iPXE, the recommendation is to download and expose the needed artifacts in an HTTP server. Currently, even if iPXE supports HTTPS protocol, the supported algorithms are old and not recommended. The full list of supported ciphers is in https://ipxe.org/crypto . Prerequisites You have created an infrastructure environment by using the API or you have created a cluster by using the UI. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell. You have an HTTP server to host the images. Note When configuring via the UI, the USDINFRA_ENV_ID and USDAPI_TOKEN variables are already provided. Note IBM Power only supports PXE, which also requires: You have installed grub2 at /var/lib/tftpboot You have installed DHCP and TFTP for PXE Procedure Download the iPXE script directly from the UI, or get the iPXE script from the Assisted Installer: USD curl \ --silent \ --header "Authorization: Bearer USDAPI_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script Example #!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" boot Download the required artifacts by extracting URLs from the ipxe-script . Download the initial RAM disk: USD awk '/^initrd /{print USDNF}' ipxe-script | curl -o initrd.img Download the linux kernel: USD awk '/^kernel /{print USD2}' ipxe-script | curl -o kernel Download the root filesystem: USD grep ^kernel ipxe-script | xargs -n1| grep ^coreos.live.rootfs_url | cut -d = -f 2- | curl -o rootfs.img Change the URLs to the different artifacts in the ipxe-script` to match your local HTTP server. 
For example: #!ipxe set webserver http://192.168.0.1 initrd --name initrd USDwebserver/initrd.img kernel USDwebserver/kernel initrd=initrd coreos.live.rootfs_url=USDwebserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" boot Optional: When installing with RHEL KVM on IBM zSystems you must boot the host by specifying additional kernel arguments random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8 Note If you install with iPXE on RHEL KVM, in some circumstances, the VMs on the VM host are not rebooted on first boot and need to be started manually. Optional: When installing on IBM Power you must download intramfs, kernel, and root as follows: Copy initrd.img and kernel.img to PXE directory `/var/lib/tftpboot/rhcos ` Copy rootfs.img to HTTPD directory `/var/www/html/install ` Add following entry to `/var/lib/tftpboot/boot/grub2/grub.cfg `: if [ USD{net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry "CoreOS (BIOS)" { echo "Loading kernel" linux "/rhcos/kernel.img" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo "Loading initrd" initrd "/rhcos/initrd.img" } fi | [
"dd if=<path_to_iso> of=<path_to_usb> status=progress",
"source refresh-token",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia",
"curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset",
"curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script",
"#!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot",
"awk '/^initrd /{print USDNF}' ipxe-script | curl -o initrd.img",
"awk '/^kernel /{print USD2}' ipxe-script | curl -o kernel",
"grep ^kernel ipxe-script | xargs -n1| grep ^coreos.live.rootfs_url | cut -d = -f 2- | curl -o rootfs.img",
"#!ipxe set webserver http://192.168.0.1 initrd --name initrd USDwebserver/initrd.img kernel USDwebserver/kernel initrd=initrd coreos.live.rootfs_url=USDwebserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot",
"random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8",
"if [ USD{net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel.img\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo \"Loading initrd\" initrd \"/rhcos/initrd.img\" } fi"
] | https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/assembly_booting-hosts-with-the-discovery-image |
8.12. Verifying Network Configuration Teaming for Redundancy | 8.12. Verifying Network Configuration Teaming for Redundancy Network redundancy is a process in which devices are used for backup purposes to prevent or recover from the failure of a specific system. The following procedure describes how to verify the network teaming configuration for redundancy: Procedure Ping the destination IP from the team interface. For example: View which interface is in active mode: enp1s0 is the active interface. Temporarily remove the network cable from the host. Note There is no method to properly test link failure events using software utilities. Tools that deactivate connections, such as ip or nmcli , show only the driver's ability to handle port configuration changes and not actual link failure events. Check if the backup interface is up: enp2s0 is now the active interface. Check if you can still ping the destination IP from the team interface: | [
"~]# ping -I team0 DSTADDR",
"~]# teamdctl team0 state setup: runner: activebackup ports: enp1s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 enp2s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp1s0",
"~]# teamdctl team0 state setup: runner: activebackup ports: enp1s0 link watches: link summary: down instance[link_watch_0]: name: ethtool link: down down count: 1 enp2s0 link watches: link summary: up instance[link_watch_0]: name: ethtool link: up down count: 0 runner: active port: enp2s0",
"~]# ping -I team0 DSTADDR"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-verifying_network_configuration_teaming_for_redundancy |
Chapter 2. Understanding authentication | Chapter 2. Understanding authentication For users to interact with OpenShift Dedicated, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Dedicated API. The authorization layer then uses information about the requesting user to determine if the request is allowed. 2.1. Users A user in OpenShift Dedicated is an entity that can make requests to the OpenShift Dedicated API. An OpenShift Dedicated User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Dedicated. Several types of users can exist: User type Description Regular users This is the way most interactive OpenShift Dedicated users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access OpenShift Dedicated. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the OpenShift Dedicated API are authenticated using the following methods: OAuth access tokens Obtained from the OpenShift Dedicated OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. 
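For example, once a client holds an OAuth access token, it presents the token to the API in the Authorization header. The following request is only an illustrative sketch; the token value, API hostname, and port are placeholders rather than values taken from this document:
curl -H "Authorization: Bearer <access_token>" https://api.<cluster_domain>:6443/apis/user.openshift.io/v1/users/~
A valid token returns the User object for the authenticated user; a missing or invalid token is handled as described below, either rejected with a 401 error or treated as the anonymous user.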
X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. OpenShift Dedicated OAuth server The OpenShift Dedicated master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Dedicated API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Dedicated to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Dedicated supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are sent only if an X-CSRF-Token header is on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Dedicated is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request . | [
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/understanding-authentication |
OperatorHub APIs | OperatorHub APIs OpenShift Container Platform 4.18 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operatorhub_apis/index |
Chapter 5. Encrypting cinder volumes | Chapter 5. Encrypting cinder volumes You can use barbican to manage your Block Storage (cinder) encryption keys. This configuration uses LUKS to encrypt the disks attached to your instances, including boot disks. Key management is transparent to the user; when you create a new volume using luks as the encryption type, cinder generates a symmetric key secret for the volume and stores it in barbican. When booting the instance (or attaching an encrypted volume), nova retrieves the key from barbican and stores the secret locally as a Libvirt secret on the Compute node. Important Nova formats encrypted volumes during their first use if they are unencrypted. The resulting block device is then presented to the Compute node. Note If you intend to update any configuration files, be aware that certain OpenStack services now run within containers; this applies to keystone, nova, and cinder, among others. As a result, there are administration practices to consider: Do not update any configuration file you might find on the physical node's host operating system, for example, /etc/cinder/cinder.conf . The containerized service does not reference this file. Do not update the configuration file running within the container. Changes are lost once you restart the container. Instead, if you must change containerized services, update the configuration file in /var/lib/config-data/puppet-generated/ , which is used to generate the container. For example: keystone: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf cinder: /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf nova: /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf Changes are applied after you restart the container. On nodes running the cinder-volume and nova-compute services, confirm that nova and cinder are both configured to use barbican for key management: Create a volume template that uses encryption. When you create new volumes they can be modeled off the settings you define here: Create a new volume and specify that it uses the LuksEncryptor-Template-256 settings: Note Ensure that the user creating the encrypted volume has the creator barbican role on the project. For more information, see the Grant user access to the creator role section. The resulting secret is automatically uploaded to the barbican backend. Use barbican to confirm that the disk encryption key is present. In this example, the timestamp matches the LUKS volume creation time: Attach the new volume to an existing instance. For example: The volume is then presented to the guest operating system and can be mounted using the built-in tools. 5.1. Migrate existing volume keys to Barbican Previously, deployments might have used ConfKeyManager to manage disk encryption keys. This meant that a fixed key was generated and then stored in the nova and cinder configuration files. The key IDs can be migrated to barbican using the following procedure. This utility works by scanning the databases for encryption_key_id entries within scope for migration to barbican. Each entry gets a new barbican key ID and the existing ConfKeyManager secret is retained. Note Previously, you could reassign ownership for volumes encrypted using ConfKeyManager . This is not possible for volumes that have their keys managed by barbican. Note Activating barbican will not break your existing keymgr volumes. 
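Before starting the migration described below, it can be useful to know how many volumes still reference the ConfKeyManager's all-zeros key ID. The following query is only a sketch; it assumes direct access to the cinder database from the node that hosts it, and the connection details will differ in your deployment:
mysql cinder -e "SELECT COUNT(*) FROM volumes WHERE encryption_key_id = '00000000-0000-0000-0000-000000000000' AND deleted = 0;"
The cinder-volume and cinder-backup logs report this kind of count while the migration progresses.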
After it is enabled, the migration process runs automatically, but it requires some configuration, described in the next section. The actual migration runs in the cinder-volume and cinder-backup processes, and you can track the progress in the cinder log files. cinder-volume - migrates keys stored in cinder's Volumes and Snapshots tables. cinder-backup - migrates keys in the Backups table. 5.1.1. Overview of the migration steps Deploy the barbican service. Add the creator role to the cinder service. For example: Restart the cinder-volume and cinder-backup services. cinder-volume and cinder-backup automatically begin the migration process. Monitor the logs for the message indicating migration has finished and check that no more volumes are using the ConfKeyManager all-zeros encryption key ID. Remove the fixed_key option from cinder.conf and nova.conf . You must determine which nodes have this setting configured. Remove the creator role from the cinder service. 5.1.2. Behavioral differences Barbican-managed encrypted volumes behave differently than volumes that use ConfKeyManager : You cannot transfer ownership of encrypted volumes, because it is not currently possible to transfer ownership of the barbican secret. Barbican is more restrictive about who is allowed to read and delete secrets, which can affect some cinder volume operations. For example, a user cannot attach, detach, or delete a different user's volumes. 5.1.3. Reviewing the migration process This section describes how you can view the status of the migration tasks. After you start the process, one of these entries appears in the logs. This indicates whether the migration started correctly, or it identifies the issue it encountered: Not migrating encryption keys because the ConfKeyManager is still in use. Not migrating encryption keys because the ConfKeyManager's fixed_key is not in use. Not migrating encryption keys because migration to the 'XXX' key_manager backend is not supported. - This message is unlikely to appear; it is a safety check in case the code ever encounters a Key Manager backend other than barbican. This is because the code only supports one migration scenario: From ConfKeyManager to barbican. Not migrating encryption keys because there are no volumes associated with this host. - This may occur when cinder-volume is running on multiple hosts, and a particular host has no volumes associated with it. This arises because every host is responsible for handling its own volumes. Starting migration of ConfKeyManager keys. Migrating volume <UUID> encryption key to Barbican - During migration, all of the host's volumes are examined, and if a volume is still using the ConfKeyManager's key ID (identified by the fact that it's all zeros ( 00000000-0000-0000-0000-000000000000 )), then this message appears. For cinder-backup , this message uses slightly different capitalization: Migrating Volume [...] or Migrating Backup [...] After each host examines all of its volumes, the host displays a summary status message: You may also see the following entries: There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID. There are still %d backup(s) using the ConfKeyManager's all-zeros encryption key ID. Note that both of these messages can appear in the cinder-volume and cinder-backup logs. Although each service only handles the migration of its own entries, each service is aware of the other's status.
As a result, cinder-volume knows if cinder-backup still has backups to migrate, and cinder-backup knows if the cinder-volume service has volumes to migrate. Although each host migrates only its own volumes, the summary message is based on a global assessment of whether any volume still requires migration. This allows you to confirm that migration for all volumes is complete. Once you receive confirmation, remove the fixed_key setting from cinder.conf and nova.conf . See the Clean up the fixed keys section below for more information. 5.1.4. Troubleshooting the migration process 5.1.4.1. Role assignment The barbican secret can only be created when the requestor has the creator role. This means that the cinder service itself requires the creator role, otherwise a log sequence similar to this will occur: Starting migration of ConfKeyManager keys. Migrating volume <UUID> encryption key to Barbican Error migrating encryption key: Forbidden: Secret creation attempt not allowed - please review your user/project privileges There are still %d volume(s) using the ConfKeyManager's all-zeros encryption key ID. The key message is the third one: Secret creation attempt not allowed. To fix the problem, update the cinder account's privileges: Run openstack role add --project service --user cinder creator Restart the cinder-volume and cinder-backup services. As a result, the next migration attempt should succeed. 5.1.5. Clean up the fixed keys Important The encryption_key_id was only recently added to the Backup table, as part of the Queens release. As a result, pre-existing backups of encrypted volumes are likely to exist. The all-zeros encryption_key_id is stored on the backup itself, but it won't appear in the Backup database. As such, it is impossible for the migration process to know for certain whether a backup of an encrypted volume exists that still relies on the all-zeros ConfKeyMgr key ID. After migrating your key IDs into barbican, the fixed key remains in the configuration files. This may present a security concern to some users, because the fixed_key value is not encrypted in the .conf files. To address this, you can manually remove the fixed_key values from your nova and cinder configurations. However, first complete testing and review the output of the log file before you proceed, because disks that are still dependent on this value will not be accessible. Review the existing fixed_key values. The values must match for both services. Important Make a backup of the existing fixed_key values. This allows you to restore the value if something goes wrong, or if you need to restore a backup that uses the old encryption key. Delete the fixed_key values: 5.2. Automatic deletion of volume image encryption key The Block Storage service (cinder) creates an encryption key in the Key Management service (barbican) when it uploads an encrypted volume to the Image service (glance). This creates a 1:1 relationship between an encryption key and a stored image. Encryption key deletion prevents unlimited resource consumption of the Key Management service. The Block Storage, Key Management, and Image services automatically manage the key for an encrypted volume, including the deletion of the key. The Block Storage service automatically adds two properties to a volume image: cinder_encryption_key_id - The identifier of the encryption key that the Key Management service stores for a specific image.
cinder_encryption_key_deletion_policy - The policy that tells the Image service to tell the Key Management service whether to delete the key associated with this image. Important The values of these properties are automatically assigned. To avoid unintentional data loss, do not adjust these values . When you create a volume image, the Block Storage service sets the cinder_encryption_key_deletion_policy property to on_image_deletion . When you delete a volume image, the Image service deletes the corresponding encryption key if the cinder_encryption_key_deletion_policy equals on_image_deletion . Important Red Hat does not recommend manual manipulation of the cinder_encryption_key_id or cinder_encryption_key_deletion_policy properties. If you use the encryption key that is identified by the value of cinder_encryption_key_id for any other purpose, you risk data loss. | [
"crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf key_manager backend castellan.key_manager.barbican_key_manager.BarbicanKeyManager",
"openstack volume type create --encryption-provider nova.volume.encryptors.luks.LuksEncryptor --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end LuksEncryptor-Template-256 +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | description | None | | encryption | cipher='aes-xts-plain64', control_location='front-end', encryption_id='9df604d0-8584-4ce8-b450-e13e6316c4d3', key_size='256', provider='nova.volume.encryptors.luks.LuksEncryptor' | | id | 78898a82-8f4c-44b2-a460-40a5da9e4d59 | | is_public | True | | name | LuksEncryptor-Template-256 | +-------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+",
"openstack volume create --size 1 --type LuksEncryptor-Template-256 'Encrypted-Test-Volume' +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2018-01-22T00:19:06.000000 | | description | None | | encrypted | True | | id | a361fd0b-882a-46cc-a669-c633630b5c93 | | migration_status | None | | multiattach | False | | name | Encrypted-Test-Volume | | properties | | | replication_status | None | | size | 1 | | snapshot_id | None | | source_volid | None | | status | creating | | type | LuksEncryptor-Template-256 | | updated_at | None | | user_id | 0e73cb3111614365a144e7f8f1a972af | +---------------------+--------------------------------------+",
"openstack secret list +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | Secret href | Name | Created | Status | Content types | Algorithm | Bit length | Secret type | Mode | Expiration | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+ | https://192.168.123.169:9311/v1/secrets/24845e6d-64a5-4071-ba99-0fdd1046172e | None | 2018-01-22T02:23:15+00:00 | ACTIVE | {u'default': u'application/octet-stream'} | aes | 256 | symmetric | None | None | +------------------------------------------------------------------------------------+------+---------------------------+--------+-------------------------------------------+-----------+------------+-------------+------+------------+",
"openstack server add volume testInstance Encrypted-Test-Volume",
"#openstack role create creator #openstack role add --user cinder creator --project service",
"`No volumes are using the ConfKeyManager's encryption_key_id.` `No backups are known to be using the ConfKeyManager's encryption_key_id.`",
"crudini --get /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key crudini --get /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key",
"crudini --del /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf keymgr fixed_key crudini --del /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf keymgr fixed_key"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/manage_secrets_with_openstack_key_manager/encrypting_cinder_volumes |
Chapter 4. Configuring persistent storage | Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Dedicated clusters are prebuilt with four storage classes that use Amazon Elastic Block Store (Amazon EBS) volumes. These storage classes are ready to use and some familiarity with Kubernetes and AWS is assumed. Following are the four prebuilt storage classes: Name Provisioner gp2 kubernetes.io/aws-ebs gp2-csi ebs.csi.aws.com gp3 (default) kubernetes.io/aws-ebs gp3-csi ebs.csi.aws.com The gp3 storage class is set as default; however, you can select any of the storage classes as the default storage class. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Dedicated cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Dedicated version 4.10 and later use gp3 storage and the AWS EBS CSI driver . Important High-availability of storage in the infrastructure is left to the underlying storage provider. 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Dedicated. Procedure In the OpenShift Dedicated console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Dedicated mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Dedicated formats them before the first use. 4.1.4. Maximum number of EBS volumes on a node By default, OpenShift Dedicated supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type. 
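If you prefer the command line to the console workflow described in "Creating the persistent volume claim", you can also create a storage class and a claim from manifests. The following is only a sketch; the names, the gp3 type, and the 10Gi size are example values, not required ones:
cat << EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-gp3-example        # example name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3                   # EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-example-claim      # example name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: my-gp3-example
  resources:
    requests:
      storage: 10Gi           # example size
EOF
Because the storage class uses the WaitForFirstConsumer binding mode, the claim remains pending until a pod uses it, at which point a gp3-backed persistent volume is dynamically provisioned.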
For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator . 4.1.5. Encrypting container persistent volumes on AWS with a KMS key Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS. Prerequisites Underlying infrastructure must contain storage. You must create a customer KMS key on AWS. Procedure Create a storage class: USD cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: "true" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF 1 Specifies the name of the storage class. 2 File system that is created on provisioned volumes. 3 Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true , then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation. Create a persistent volume claim (PVC) with the storage class specifying the KMS key: USD cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF Create workload containers to consume the PVC: USD cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using GCE Persistent Disk OpenShift Dedicated supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Dedicated cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Dedicated cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.2.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Dedicated. Procedure In the OpenShift Dedicated console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . 
Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Dedicated mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Dedicated formats them before the first use. | [
"cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF",
"cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF",
"cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/storage/configuring-persistent-storage |
10.5. Customizing Desktop Backgrounds | 10.5. Customizing Desktop Backgrounds Using the dconf utility, you can configure the default background, add extra backgrounds, or add multiple backgrounds. If the users of the system will not be permitted to change these settings from the defaults, then system administrators need to lock the settings using the locks directory. Otherwise each user will be able to customize the setting to suit their own preferences. For more information, see Section 9.5.1, "Locking Down Specific Settings" . 10.5.1. Customizing the Default Desktop Background You can configure the default desktop background and its appearance by setting the relevant GSettings keys in the org.gnome.desktop.background schema. For more information about GSettings, see Chapter 9, Configuring Desktop with GSettings and dconf . Procedure 10.10. Setting the Default Background Create a local database for machine-wide settings in /etc/dconf/db/local.d/ 00-background : Override the user's setting to prevent the user from changing it in /etc/dconf/db/local.d/locks/background : For more information, see Section 9.5.1, "Locking Down Specific Settings" . Update the system databases: Users must log out and back in again before the system-wide settings take effect. 10.5.2. Adding Extra Backgrounds You can make extra backgrounds available to users on your system. Create a filename .xml file (there are no requirements for file names) specifying your extra background's appearance using the org.gnome.desktop.background schemas . Here is a list of the most frequently used schemas: Table 10.1. org.gnome.desktop.background schemas GSettings Keys Key Name Possible Values Description picture-options "none", "wallpaper", "centered", "scaled", "stretched", "zoom", "spanned" Determines how the image set by wallpaper_filename is rendered. color-shading-type "horizontal", "vertical", and "solid" How to shade the background color. primary-color default: #023c88 Left or Top color when drawing gradients, or the solid color. secondary-color default: #5789ca Right or Bottom color when drawing gradients, not used for solid color. The full range of options is to be found in the dconf-editor GUI or gsettings command-line utility. For more information, see Section 9.3, "Browsing GSettings Values for Desktop Applications" . Store the filename .xml file in the /usr/share/gnome-background-properties/ directory. When the user clicks their name in the top right corner, chooses Settings , and in the Personal section of the table selects Background , they will see the new background available. Look at the example and see how org.gnome.desktop.background GSettings keys are implemented practically: Example 10.4. Extra Backgrounds File <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd"> <wallpapers> <wallpaper deleted="false"> <name>Company Background</name> <name xml:lang="de">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> </wallpapers> In one configuration file, you can specify multiple <wallpaper> elements to add more backgrounds. See the following example which shows an .xml file with two <wallpaper> elements, adding two different backgrounds: Example 10.5. 
Extra Backgrounds File with Two Wallpaper Elements <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd"> <wallpapers> <wallpaper deleted="false"> <name>Company Background</name> <name xml:lang="de">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> <wallpaper deleted="false"> <name>Company Background 2</name> <name xml:lang="de">Firmenhintergrund 2</name> <filename>/usr/local/share/backgrounds/company-wallpaper-2.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ff0000</pcolor> <scolor>#00ffff</scolor> </wallpaper> </wallpapers> 10.5.3. Setting the Screen Shield Screen Shield is the screen that quickly slides down when the system is locked. It is controlled by the org.gnome.desktop.screensaver.picture-uri GSettings key. Since GDM uses its own dconf profile, you can set the default background by changing the settings in that profile. For more information about GSettings and dconf , see Chapter 9, Configuring Desktop with GSettings and dconf . Procedure 10.11. Adding a Logo to the Screen Shield Create a gdm database for machine-wide settings in /etc/dconf/db/gdm.d/ 01-screensaver : Replace /opt/corp/background.jpg with the path to the image file you want to use as the Screen Shield. Supported formats are PNG, JPG, JPEG, and TGA. The image will be scaled if necessary to fit the screen. Update the system databases: You must log out before the system-wide settings take effect. The next time you lock the screen, the new Screen Shield will show in the background. In the foreground, the time, date, and current day of the week will be displayed. 10.5.3.1. What If the Screen Shield Does Not Update? Make sure that you have run the dconf update command as root to update the system databases. In case the background does not update, try restarting GDM . For more information, see Section 14.1.1, "Restarting GDM" . | [
"Specify the dconf path Specify the path to the desktop background image file picture-uri='file:///usr/local/share/backgrounds/wallpaper.jpg' Specify one of the rendering options for the background image: 'none', 'wallpaper', 'centered', 'scaled', 'stretched', 'zoom', 'spanned' picture-options='scaled' Specify the left or top color when drawing gradients or the solid color primary-color='000000' Specify the right or bottom color when drawing gradients secondary-color='FFFFFF'",
"List the keys used to configure the desktop background /org/gnome/desktop/background/picture-uri /org/gnome/desktop/background/picture-options /org/gnome/desktop/background/primary-color /org/gnome/desktop/background/secondary-color",
"dconf update",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <!DOCTYPE wallpapers SYSTEM \"gnome-wp-list.dtd\"> <wallpapers> <wallpaper deleted=\"false\"> <name>Company Background</name> <name xml:lang=\"de\">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> </wallpapers>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <!DOCTYPE wallpapers SYSTEM \"gnome-wp-list.dtd\"> <wallpapers> <wallpaper deleted=\"false\"> <name>Company Background</name> <name xml:lang=\"de\">Firmenhintergrund</name> <filename>/usr/local/share/backgrounds/company-wallpaper.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ffffff</pcolor> <scolor>#000000</scolor> </wallpaper> <wallpaper deleted=\"false\"> <name>Company Background 2</name> <name xml:lang=\"de\">Firmenhintergrund 2</name> <filename>/usr/local/share/backgrounds/company-wallpaper-2.jpg</filename> <options>zoom</options> <shade_type>solid</shade_type> <pcolor>#ff0000</pcolor> <scolor>#00ffff</scolor> </wallpaper> </wallpapers>",
"[org/gnome/desktop/screensaver] picture-uri=' file:///opt/corp/background.jpg '",
"dconf update"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/customize-desktop-backgrounds |
Chapter 4. Protect a web application by using OpenID Connect (OIDC) authorization code flow | Chapter 4. Protect a web application by using OpenID Connect (OIDC) authorization code flow Discover how to secure application HTTP endpoints by using the Quarkus OpenID Connect (OIDC) authorization code flow mechanism with the Quarkus OIDC extension, providing robust authentication and authorization. For more information, see OIDC code flow mechanism for protecting web applications . To learn about how well-known social providers such as Apple, Facebook, GitHub, Google, Mastodon, Microsoft, Spotify, Twitch, and X (formerly Twitter) can be used with Quarkus OIDC, see Configuring well-known OpenID Connect providers . See also, Authentication mechanisms in Quarkus . If you want to protect your service applications by using OIDC Bearer token authentication, see OIDC Bearer token authentication . 4.1. Prerequisites To complete this guide, you need: Roughly 15 minutes An IDE JDK 17+ installed with JAVA_HOME configured appropriately Apache Maven 3.8.6 or later A working container runtime (Docker or Podman ) Optionally the Quarkus CLI if you want to use it Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build) 4.2. Architecture In this example, we build a simple web application with a single page: /index.html This page is protected, and only authenticated users can access it. 4.3. Solution Follow the instructions in the sections and create the application step by step. Alternatively, you can go right to the completed example. Clone the Git repository by running the git clone https://github.com/quarkusio/quarkus-quickstarts.git -b 3.15 command. Alternatively, download an archive . The solution is located in the security-openid-connect-web-authentication-quickstart directory . 4.4. Create the Maven project First, we need a new project. Create a new project by running the following command: Using the Quarkus CLI: quarkus create app org.acme:security-openid-connect-web-authentication-quickstart \ --extension='rest,oidc' \ --no-code cd security-openid-connect-web-authentication-quickstart To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option. For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide. Using Maven: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=security-openid-connect-web-authentication-quickstart \ -Dextensions='rest,oidc' \ -DnoCode cd security-openid-connect-web-authentication-quickstart To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option. For Windows users: If using cmd, (don't use backward slash \ and put everything on the same line) If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=security-openid-connect-web-authentication-quickstart" If you already have your Quarkus project configured, you can add the oidc extension to your project by running the following command in your project base directory: Using the Quarkus CLI: quarkus extension add oidc Using Maven: ./mvnw quarkus:add-extension -Dextensions='oidc' Using Gradle: ./gradlew addExtension --extensions='oidc' This adds the following dependency to your build file: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency> Using Gradle: implementation("io.quarkus:quarkus-oidc") 4.5. 
Write the application Let's write a simple Jakarta REST resource that has all the tokens returned in the authorization code grant response injected: package org.acme.security.openid.connect.web.authentication; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.oidc.RefreshToken; @Path("/tokens") public class TokenResource { /** * Injection point for the ID token issued by the OpenID Connect provider */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the access token issued by the OpenID Connect provider */ @Inject JsonWebToken accessToken; /** * Injection point for the refresh token issued by the OpenID Connect provider */ @Inject RefreshToken refreshToken; /** * Returns the tokens available to the application. * This endpoint exists only for demonstration purposes. * Do not expose these tokens in a real application. * * @return an HTML page containing the tokens available to the application. */ @GET @Produces("text/html") public String getTokens() { StringBuilder response = new StringBuilder().append("<html>") .append("<body>") .append("<ul>"); Object userName = this.idToken.getClaim(Claims.preferred_username); if (userName != null) { response.append("<li>username: ").append(userName.toString()).append("</li>"); } Object scopes = this.accessToken.getClaim("scope"); if (scopes != null) { response.append("<li>scopes: ").append(scopes.toString()).append("</li>"); } response.append("<li>refresh_token: ").append(refreshToken.getToken() != null).append("</li>"); return response.append("</ul>").append("</body>").append("</html>").toString(); } } This endpoint has ID, access, and refresh tokens injected. It returns a preferred_username claim from the ID token, a scope claim from the access token, and a refresh token availability status. You only need to inject the tokens if the endpoint needs to use the ID token to interact with the currently authenticated user or use the access token to access a downstream service on behalf of this user. For more information, see the Access ID and Access Tokens section of the reference guide. 4.6. Configure the application The OIDC extension allows you to define the configuration by using the application.properties file in the src/main/resources directory. %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated This is the simplest configuration you can have when enabling authentication to your application. The quarkus.oidc.client-id property references the client_id issued by the OIDC provider, and the quarkus.oidc.credentials.secret property sets the client secret. The quarkus.oidc.application-type property is set to web-app to tell Quarkus that you want to enable the OIDC authorization code flow so that your users are redirected to the OIDC provider to authenticate. Finally, the quarkus.http.auth.permission.authenticated permission is set to tell Quarkus about the paths you want to protect. In this case, all paths are protected by a policy that ensures only authenticated users can access them. For more information, see Security Authorization Guide . 
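The authenticated permission above covers every path in the application. If you want to protect only part of it, the same permission can be narrowed to specific paths. The following variant is shown purely to illustrate the permission syntax; the path values are examples, not part of the quickstart:
# Protect only the /tokens resource; all other paths stay unauthenticated.
quarkus.http.auth.permission.authenticated.paths=/tokens,/tokens/*
quarkus.http.auth.permission.authenticated.policy=authenticated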
Note When you do not configure a client secret with quarkus.oidc.credentials.secret , it is recommended to configure quarkus.oidc.token-state-manager.encryption-secret . The quarkus.oidc.token-state-manager.encryption-secret enables the default token state manager to encrypt the user tokens in a browser cookie. If this key is not defined, and the quarkus.oidc.credentials.secret fallback is not configured, Quarkus uses a random key. A random key causes existing logins to be invalidated either on application restart or in environment with multiple instances of your application. Alternatively, encryption can also be disabled by setting quarkus.oidc.token-state-manager.encryption-required to false . However, you should disable secret encryption in development environments only. The encryption secret is recommended to be 32 chars long. For example, quarkus.oidc.token-state-manager.encryption-secret=AyM1SysPpbyDfgZld3umj1qzKObwVMk 4.7. Start and configure the Keycloak server To start a Keycloak server, use Docker and run the following command: docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev where keycloak.version is set to 25.0.6 or later. You can access your Keycloak Server at localhost:8180 . To access the Keycloak Administration Console, log in as the admin user. The username and password are both admin . To create a new realm, import the realm configuration file . For more information, see the Keycloak documentation about how to create and configure a new realm . 4.8. Run the application in dev and JVM modes To run the application in dev mode, use: Using the Quarkus CLI: quarkus dev Using Maven: ./mvnw quarkus:dev Using Gradle: ./gradlew --console=plain quarkusDev After exploring the application in dev mode, you can run it as a standard Java application. First, compile it: Using the Quarkus CLI: quarkus build Using Maven: ./mvnw install Using Gradle: ./gradlew build Then, run it: java -jar target/quarkus-app/quarkus-run.jar 4.9. Run the application in Native mode This same demo can be compiled into native code. No modifications are required. This implies that you no longer need to install a JVM on your production environment, as the runtime technology is included in the produced binary and optimized to run with minimal resources. Compilation takes longer, so this step is turned off by default. You can build again by enabling the native build: Using the Quarkus CLI: quarkus build --native Using Maven: ./mvnw install -Dnative Using Gradle: ./gradlew build -Dquarkus.native.enabled=true After a while, you can run this binary directly: ./target/security-openid-connect-web-authentication-quickstart-runner 4.10. Test the application To test the application, open your browser and access the following URL: http://localhost:8080/tokens If everything works as expected, you are redirected to the Keycloak server to authenticate. To authenticate to the application, enter the following credentials at the Keycloak login page: Username: alice Password: alice After clicking the Login button, you are redirected back to the application, and a session cookie will be created. The session for this demo is valid for a short period of time and, on every page refresh, you will be asked to re-authenticate. For information about how to increase the session timeouts, see the Keycloak session timeout documentation. 
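You can also verify the protection without a browser. This quick check is not part of the quickstart, but an unauthenticated request to the protected path should be answered with a redirect to the Keycloak realm rather than with the page content:
curl -v http://localhost:8080/tokens
Expect an HTTP 302 response whose Location header points at http://localhost:8180/realms/quarkus/protocol/openid-connect/auth with the application's client_id and redirect_uri appended as query parameters.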
For example, you can access the Keycloak Admin console directly from the dev UI by clicking the Keycloak Admin link if you use Dev Services for Keycloak in dev mode: For more information about writing the integration tests that depend on Dev Services for Keycloak , see the Dev Services for Keycloak section. 4.11. Summary You have learned how to set up and use the OIDC authorization code flow mechanism to protect and test application HTTP endpoints. After you have completed this tutorial, explore OIDC Bearer token authentication and other authentication mechanisms . 4.12. References Quarkus Security overview OIDC code flow mechanism for protecting web applications Configuring well-known OpenID Connect providers OpenID Connect and OAuth2 Client and Filters reference guide Dev Services for Keycloak Sign and encrypt JWT tokens with SmallRye JWT Build Choosing between OpenID Connect, SmallRye JWT, and OAuth2 authentication mechanisms Keycloak Documentation Protect Quarkus web application by using Auth0 OpenID Connect provider OpenID Connect JSON Web Token | [
"quarkus create app org.acme:security-openid-connect-web-authentication-quickstart --extension='rest,oidc' --no-code cd security-openid-connect-web-authentication-quickstart",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.1:create -DprojectGroupId=org.acme -DprojectArtifactId=security-openid-connect-web-authentication-quickstart -Dextensions='rest,oidc' -DnoCode cd security-openid-connect-web-authentication-quickstart",
"quarkus extension add oidc",
"./mvnw quarkus:add-extension -Dextensions='oidc'",
"./gradlew addExtension --extensions='oidc'",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc</artifactId> </dependency>",
"implementation(\"io.quarkus:quarkus-oidc\")",
"package org.acme.security.openid.connect.web.authentication; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import org.eclipse.microprofile.jwt.Claims; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.oidc.RefreshToken; @Path(\"/tokens\") public class TokenResource { /** * Injection point for the ID token issued by the OpenID Connect provider */ @Inject @IdToken JsonWebToken idToken; /** * Injection point for the access token issued by the OpenID Connect provider */ @Inject JsonWebToken accessToken; /** * Injection point for the refresh token issued by the OpenID Connect provider */ @Inject RefreshToken refreshToken; /** * Returns the tokens available to the application. * This endpoint exists only for demonstration purposes. * Do not expose these tokens in a real application. * * @return an HTML page containing the tokens available to the application. */ @GET @Produces(\"text/html\") public String getTokens() { StringBuilder response = new StringBuilder().append(\"<html>\") .append(\"<body>\") .append(\"<ul>\"); Object userName = this.idToken.getClaim(Claims.preferred_username); if (userName != null) { response.append(\"<li>username: \").append(userName.toString()).append(\"</li>\"); } Object scopes = this.accessToken.getClaim(\"scope\"); if (scopes != null) { response.append(\"<li>scopes: \").append(scopes.toString()).append(\"</li>\"); } response.append(\"<li>refresh_token: \").append(refreshToken.getToken() != null).append(\"</li>\"); return response.append(\"</ul>\").append(\"</body>\").append(\"</html>\").toString(); } }",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.http.auth.permission.authenticated.paths=/* quarkus.http.auth.permission.authenticated.policy=authenticated",
"docker run --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin -p 8180:8080 quay.io/keycloak/keycloak:{keycloak.version} start-dev",
"quarkus dev",
"./mvnw quarkus:dev",
"./gradlew --console=plain quarkusDev",
"quarkus build",
"./mvnw install",
"./gradlew build",
"java -jar target/quarkus-app/quarkus-run.jar",
"quarkus build --native",
"./mvnw install -Dnative",
"./gradlew build -Dquarkus.native.enabled=true",
"./target/security-openid-connect-web-authentication-quickstart-runner"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_authentication/security-oidc-code-flow-authentication-tutorial |
Chapter 2. Building a RHEL image with custom repositories | Chapter 2. Building a RHEL image with custom repositories Start by using the Insights image builder application to build a RHEL for Edge image. RHEL for Edge is optimized for edge computing to provide faster data delivery. It also enables updates with OSTree , which is a git-like model of updates that sends only the changes in your update, so that updates are quick and rollback is easy. Optionally, you can add packages from custom repositories outside of Red Hat to customize your RHEL for Edge image and enable it with the features and packages that you need for your business. Warning Using RHEL for Edge customized images that were created using the on-premise version of RHEL image builder is not supported by the Insights image builder application. For more details, see Edge images supportability . Prerequisites You must have a Red Hat Hybrid Cloud Console account. Procedure Access Red Hat Hybrid Cloud Console platform and log in. From the console dashboard, navigate to Red Hat Insights > RHEL > Inventory > Images . The Insights image builder environment opens. In the image builder application, click the Immutable (OSTree) tab. Click Create new image . On the Create image wizard, follow the steps: On the Details page, enter an image name and click Next . On the Options page, select the following details: Select the image base release for your image. Select the option RHEL for Edge Installer (.iso) . Click Next . On the System registration page, follow the steps: Enter a username. This is the username to log in to your system after it is created. Enter a public SSH key to create a user for your image. Click Next . On the Activation Keys page, follow the steps: From the dropdown menu, choose an activation key to use for the image. See the Creating an activation key documentation for more details. You can manage the activation key that you chose by accessing Activation keys in the Console. Click Next . On the Content page: On the Additional Red Hat packages page, add any core RHEL package, for example, emacs . In the Available packages search field, enter emacs and click the search icon. Select emacs from the search results. Click the Add Selected or Add all arrow button to add emacs to the list. Click Next . On the Custom repositories page: Select the custom repository you added. Click Next . On the Review page: Check the data is correct and click Create image . Verification The image may take some minutes to build. After your image is created, you can see it on the Images page. You can see the following image details: Name Version Created Type Release Status Created or Updated Description Packages Changes from version User information, such as username and ssh key. Note Every time that you create an updated version of the image, the system keeps using the same activation key that you previously added to the image. To change the activation key, you need to manually register the system again. Next steps After you build the image, install it on your device. When you run the Subscription Manager tool, you can see that the device was registered with the Activation key that you chose. If you did not choose an activation key, then you can proceed to register your system with the Red Hat Remote Host Configuration (rhc) client. Additional resources How to start the upgrade process of an Edge Management deployed system . Edge images supportability .
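After the image is installed and the device has booted, a hedged way to confirm the registration from the device itself is to query Subscription Manager; the exact output depends on your organization and on the activation key that you selected:
subscription-manager identity
subscription-manager list --consumed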
| null | https://docs.redhat.com/en/documentation/edge_management/1-latest/html/create_rhel_for_edge_images_and_configure_automated_management/proc-rhem-build-image |
Chapter 2. Planning your undercloud | Chapter 2. Planning your undercloud Before you configure and install director on the undercloud, you must plan your undercloud host to ensure it meets certain requirements. 2.1. Containerized undercloud The undercloud is the node that controls the configuration, installation, and management of your final Red Hat OpenStack Platform (RHOSP) environment, which is called the overcloud. The undercloud runs each RHOSP component service as a container. The undercloud uses these containerized services to create a toolset named director, which you use to create and manage your overcloud. Since both the undercloud and overcloud use containers, both use the same architecture to pull, configure, and run containers. This architecture is based on the OpenStack Orchestration service (heat) for provisioning nodes and uses Ansible to configure services and containers. It is useful to have some familiarity with heat and Ansible to help you troubleshoot issues that you might encounter. 2.2. Preparing your undercloud networking The undercloud requires access to two main networks: The Provisioning or Control Plane network , which is the network that director uses to provision your nodes and access them over SSH when executing Ansible configuration. This network also enables SSH access from the undercloud to overcloud nodes. The undercloud contains DHCP services for introspection and provisioning other nodes on this network, which means that no other DHCP services should exist on this network. The director configures the interface for this network. The External network , which enables access to OpenStack Platform repositories, container image sources, and other servers such as DNS servers or NTP servers. Use this network for standard access to the undercloud from your workstation. You must manually configure an interface on the undercloud to access the external network. The undercloud requires a minimum of 2 x 1 Gbps Network Interface Cards: one for the Provisioning or Control Plane network and one for the External network . When you plan your network, review the following guidelines: Red Hat recommends using one network for provisioning and the control plane and another network for the data plane. Do not create provisioning and the control plane networks on top of an OVS bridge. The provisioning and control plane network can be configured on top of a Linux bond or on individual interfaces. If you use a Linux bond, configure it as an active-backup bond type. On non-controller nodes, the amount of traffic is relatively low on provisioning and control plane networks, and they do not require high bandwidth or load balancing. On Controllers, the provisioning and control plane networks need additional bandwidth. The reason for increased bandwidth is that Controllers serve many nodes in other roles. More bandwidth is also required when frequent changes are made to the environment. For best performance, Controllers with more than 50 compute nodes, or deployments where more than four bare metal nodes are provisioned simultaneously, should have 4-10 times the bandwidth of the interfaces on the non-controller nodes. The undercloud should have a higher bandwidth connection to the provisioning network when more than 50 overcloud nodes are provisioned. Do not use the same Provisioning or Control Plane NIC as the one that you use to access the director machine from your workstation. The director installation creates a bridge by using the Provisioning NIC, which drops any remote connections.
Use the External NIC for remote connections to the director system. The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range: Include at least one temporary IP address for each node that connects to the Provisioning network during introspection. Include at least one permanent IP address for each node that connects to the Provisioning network during deployment. Include an extra IP address for the virtual IP of the overcloud high availability cluster on the Provisioning network. Include additional IP addresses within this range for scaling the environment. To prevent a Controller node network card or network switch failure disrupting overcloud services availability, ensure that the keystone admin endpoint is located on a network that uses bonded network cards or networking hardware redundancy. If you move the keystone endpoint to a different network, such as internal_api , ensure that the undercloud can reach the VLAN or subnet. For more information, see the Red Hat Knowledgebase solution How to migrate Keystone Admin Endpoint to internal_api network . 2.3. Determining environment scale Before you install the undercloud, determine the scale of your environment. Include the following factors when you plan your environment: How many nodes do you want to deploy in your overcloud? The undercloud manages each node within an overcloud. Provisioning overcloud nodes consumes resources on the undercloud. You must provide your undercloud with enough resources to adequately provision and control all of your overcloud nodes. How many simultaneous operations do you want the undercloud to perform? Most OpenStack services on the undercloud use a set of workers. Each worker performs an operation specific to that service. Multiple workers provide simultaneous operations. The default number of workers on the undercloud is determined by halving the total CPU thread count on the undercloud. In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value. For example, if your undercloud has a CPU with 16 threads, then the director services spawn 8 workers by default. Director also uses a set of minimum and maximum caps by default: Service Minimum Maximum OpenStack Orchestration (heat) 4 24 All other service 2 12 The undercloud has the following minimum CPU and memory requirements: An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This provides 4 workers for each undercloud service. A minimum of 24 GB of RAM. The ceph-ansible playbook consumes 1 GB resident set size (RSS) for every 10 hosts that the undercloud deploys. If you want to use a new or existing Ceph cluster in your deployment, you must provision the undercloud RAM accordingly. To use a larger number of workers, increase the vCPUs and memory of your undercloud using the following recommendations: Minimum: Use 1.5 GB of memory for each thread. For example, a machine with 48 threads requires 72 GB of RAM to provide the minimum coverage for 24 heat workers and 12 workers for other services. Recommended: Use 3 GB of memory for each thread. For example, a machine with 48 threads requires 144 GB of RAM to provide the recommended coverage for 24 heat workers and 12 workers for other services. 2.4. 
Undercloud disk sizing The recommended minimum undercloud disk size is 100 GB of available disk space on the root disk: 20 GB for container images 10 GB to accommodate QCOW2 image conversion and caching during the node provisioning process 70 GB+ for general usage, logging, metrics, and growth 2.5. Virtualization support Red Hat only supports a virtualized undercloud on the following platforms: Platform Notes Kernel-based Virtual Machine (KVM) Hosted by Red Hat Enterprise Linux 8.4, as listed on certified hypervisors. Red Hat Virtualization Hosted by Red Hat Virtualization 4.x, as listed on certified hypervisors. Microsoft Hyper-V Hosted by versions of Hyper-V as listed on the Red Hat Customer Portal Certification Catalogue . VMware ESX and ESXi Hosted by versions of ESX and ESXi as listed on the Red Hat Customer Portal Certification Catalogue . Important Ensure that your hypervisor supports Red Hat Enterprise Linux 8.4 guests. Virtual machine requirements Resource requirements for a virtual undercloud are similar to those of a bare-metal undercloud. Consider the various tuning options when provisioning such as network model, guest CPU capabilities, storage backend, storage format, and caching mode. Network considerations Power management The undercloud virtual machine (VM) requires access to the overcloud nodes' power management devices. This is the IP address set for the pm_addr parameter when registering nodes. Provisioning network The NIC used for the provisioning network, ctlplane , requires the ability to broadcast and serve DHCP requests to the NICs of the overcloud's bare-metal nodes. Create a bridge that connects the VM's NIC to the same network as the bare metal NICs. Allow traffic from an unknown address You must configure your virtual undercloud hypervisor to prevent the hypervisor blocking the undercloud from transmitting traffic from an unknown address. The configuration depends on the platform you are using for your virtual undercloud: Red Hat Enterprise Virtualization: Disable the anti-mac-spoofing parameter. VMware ESX or ESXi: On IPv4 ctlplane network: Allow forged transmits. On IPv6 ctlplane network: Allow forged transmits, MAC address changes, and promiscuous mode operation. For more information about how to configure VMware ESX or ESXi, see Securing vSphere Standard Switches on the VMware docs website. You must power off and on the director VM after you apply these settings. Rebooting the VM is not sufficient. 2.6. Character encoding configuration Red Hat OpenStack Platform has special character encoding requirements as part of the locale settings: Use UTF-8 encoding on all nodes. Ensure the LANG environment variable is set to en_US.UTF-8 on all nodes. Avoid using non-ASCII characters if you use Red Hat Ansible Tower to automate the creation of Red Hat OpenStack Platform resources. 2.7. Considerations when running the undercloud with a proxy Running the undercloud with a proxy has certain limitations, and Red Hat recommends that you use Red Hat Satellite for registry and package management. However, if your environment uses a proxy, review these considerations to best understand the different configuration methods of integrating parts of Red Hat OpenStack Platform with a proxy and the limitations of each method. System-wide proxy configuration Use this method to configure proxy communication for all network traffic on the undercloud. 
To configure the proxy settings, edit the /etc/environment file and set the following environment variables: http_proxy The proxy that you want to use for standard HTTP requests. https_proxy The proxy that you want to use for HTTPS requests. no_proxy A comma-separated list of domains that you want to exclude from proxy communications. The system-wide proxy method has the following limitations: The maximum length of no_proxy is 1024 characters due to a fixed size buffer in the pam_env PAM module. Some containers bind and parse the environment variables in /etc/environment incorrectly, which causes problems when running these services. For more information, see BZ#1916070 - proxy configuration updates in /etc/environment files are not being picked up in containers correctly and BZ#1918408 - mistral_executor container fails to properly set no_proxy environment parameter . dnf proxy configuration Use this method to configure dnf to run all traffic through a proxy. To configure the proxy settings, edit the /etc/dnf/dnf.conf file and set the following parameters: proxy The URL of the proxy server. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password that you want to use to connect to the proxy server. proxy_auth_method The authentication method used by the proxy server. For more information about these options, run man dnf.conf . The dnf proxy method has the following limitations: This method provides proxy support only for dnf . The dnf proxy method does not include an option to exclude certain hosts from proxy communication. Red Hat Subscription Manager proxy Use this method to configure Red Hat Subscription Manager to run all traffic through a proxy. To configure the proxy settings, edit the /etc/rhsm/rhsm.conf file and set the following parameters: proxy_hostname Host for the proxy. proxy_scheme The scheme for the proxy when writing out the proxy to repo definitions. proxy_port The port for the proxy. proxy_username The username that you want to use to connect to the proxy server. proxy_password The password to use for connecting to the proxy server. no_proxy A comma-separated list of hostname suffixes for specific hosts that you want to exclude from proxy communication. For more information about these options, run man rhsm.conf . The Red Hat Subscription Manager proxy method has the following limitations: This method provides proxy support only for Red Hat Subscription Manager. The values for the Red Hat Subscription Manager proxy configuration override any values set for the system-wide environment variables. Transparent proxy If your network uses a transparent proxy to manage application layer traffic, you do not need to configure the undercloud itself to interact with the proxy because proxy management occurs automatically. A transparent proxy can help overcome limitations associated with client-based proxy configuration in Red Hat OpenStack Platform. 2.8. Undercloud repositories Red Hat OpenStack Platform (RHOSP) 16.2 runs on Red Hat Enterprise Linux (RHEL) 8.4. As a result, you must lock the content from these repositories to the respective RHEL version. Note If you synchronize repositories by using Red Hat Satellite, you can enable specific versions of the RHEL repositories. However, the repository label remains the same despite the version you choose.
For example, if you enable the 8.4 version of the BaseOS repository, the repository name includes the specific version that you enabled, but the repository label is still rhel-8-for-x86_64-baseos-eus-rpms . The advanced-virt-for-rhel-8-x86_64-rpms and advanced-virt-for-rhel-8-x86_64-eus-rpms repositories are no longer required. To disable these repositories, see the Red Hat Knowledgebase solution advanced-virt-for-rhel-8-x86_64-rpms are no longer required in OSP 16.2 . Warning Any repositories outside the ones specified here are not supported. Unless recommended, do not enable any other products or repositories outside the ones listed in the following tables or else you might encounter package dependency issues. Do not enable Extra Packages for Enterprise Linux (EPEL). Core repositories The following table lists core repositories for installing the undercloud. Name Repository Description of requirement Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-baseos-eus-rpms Base operating system repository for x86_64 systems. Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) rhel-8-for-x86_64-appstream-eus-rpms Contains RHOSP dependencies. Red Hat Enterprise Linux 8 for x86_64 - High Availability (RPMs) Extended Update Support (EUS) rhel-8-for-x86_64-highavailability-eus-rpms High availability tools for RHEL. Used for Controller node high availability. Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs) ansible-2.9-for-rhel-8-x86_64-rpms Ansible Engine for RHEL. Used to provide the latest version of Ansible. RHOSP 16.2 for RHEL 8 (RPMs) openstack-16.2-for-rhel-8-x86_64-rpms Core RHOSP repository, which contains packages for RHOSP director. Red Hat Fast Datapath for RHEL 8 (RPMS) fast-datapath-for-rhel-8-x86_64-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Ceph repositories The following table lists Ceph Storage related repositories for the undercloud. Name Repository Description of Requirement Red Hat Ceph Storage Tools 4 for RHEL 8 x86_64 (RPMs) rhceph-4-tools-for-rhel-8-x86_64-rpms Provides tools for nodes to communicate with the Ceph Storage cluster. The undercloud requires the ceph-ansible package from this repository if you plan to use Ceph Storage in your overcloud or if you want to integrate with an existing Ceph Storage cluster. IBM POWER repositories The following table contains a list of repositories for RHOSP on POWER PC architecture. Use these repositories in place of equivalents in the Core repositories. Name Repository Description of requirement Red Hat Enterprise Linux for IBM Power, little endian - BaseOS (RPMs) rhel-8-for-ppc64le-baseos-rpms Base operating system repository for ppc64le systems. Red Hat Enterprise Linux 8 for IBM Power, little endian - AppStream (RPMs) rhel-8-for-ppc64le-appstream-rpms Contains RHOSP dependencies. Red Hat Enterprise Linux 8 for IBM Power, little endian - High Availability (RPMs) rhel-8-for-ppc64le-highavailability-rpms High availability tools for RHEL. Used for Controller node high availability. Red Hat Fast Datapath for RHEL 8 IBM Power, little endian (RPMS) fast-datapath-for-rhel-8-ppc64le-rpms Provides Open vSwitch (OVS) packages for OpenStack Platform. Red Hat Ansible Engine 2.9 for RHEL 8 IBM Power, little endian (RPMs) ansible-2.9-for-rhel-8-ppc64le-rpms Ansible Engine for RHEL. Provides the latest version of Ansible. Red Hat OpenStack Platform 16.2 for RHEL 8 (RPMs) openstack-16.2-for-rhel-8-ppc64le-rpms Core RHOSP repository for ppc64le systems. 
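As an illustration only, the core repositories listed above are typically enabled with Subscription Manager, and the RHEL minor release is locked to 8.4 as described at the start of this section. This sketch assumes an x86_64 undercloud that is already registered and attached to a valid subscription:
subscription-manager release --set=8.4
subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=openstack-16.2-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms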
| null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/director_installation_and_usage/assembly_planning-your-undercloud |
22.2. Enabling Tracking of Last Successful Kerberos Authentication | 22.2. Enabling Tracking of Last Successful Kerberos Authentication For performance reasons, IdM running on Red Hat Enterprise Linux 7.4 and later does not store the time stamp of the last successful Kerberos authentication of a user. As a consequence, certain commands, such as ipa user-status , do not display the time stamp. To enable tracking of the last successful Kerberos authentication of a user: Display the currently enabled password plug-in features: You need the names of these features, except KDC:Disable Last Success , in the next step. Pass the --ipaconfigstring= feature parameter to the ipa config-mod command for every currently enabled feature, except for KDC:Disable Last Success : This command enables only the AllowNThash plug-in. To enable multiple features, specify the --ipaconfigstring= feature parameter multiple times. For example, to enable the AllowNThash and KDC:Disable Lockout features: Restart IdM: | [
"ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success",
"ipa config-mod --ipaconfigstring='AllowNThash'",
"ipa config-mod --ipaconfigstring='AllowNThash' --ipaconfigstring='KDC:Disable Lockout'",
"ipactl restart"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/enabling-tracking-of-last-successful-kerberos-authentication |
Chapter 9. Host management and monitoring by using the RHEL web console | Chapter 9. Host management and monitoring by using the RHEL web console You can use the RHEL web console interactive web interface to perform actions and monitor Red Hat Enterprise Linux hosts. You can enable a remote-execution feature to integrate Satellite with the RHEL web console. When you install the RHEL web console on a host that you manage with Satellite, you can view the RHEL web console dashboards of that host from within the Satellite web UI. You can also use the features that are integrated with the RHEL web console, for example, Red Hat Image Builder. 9.1. Enabling the RHEL web console on Satellite By default, RHEL web console integration is disabled in Satellite. If you want to access RHEL web console features for your hosts from within Satellite, you must first enable the RHEL web console on Satellite Server. Procedure Enable the RHEL web console on your Satellite Server: 9.2. Managing and monitoring hosts using the RHEL web console You can access the RHEL web console UI through the Satellite web UI and use the functionality to manage and monitor hosts in Satellite. Prerequisites You have enabled the RHEL web console in Satellite. You have installed the RHEL web console on the host that you want to manage and monitor. Satellite provides a job template named Service Action - Enable Web Console under the Ansible Services job category that you can use to install the RHEL web console. For more information about running remote jobs, see Chapter 13, Configuring and setting up remote jobs . Satellite or Capsule can authenticate to the host with SSH keys. For more information, see Section 13.14, "Distributing SSH keys for remote execution" . Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the host that you want to manage and monitor with the RHEL web console. In the upper right of the host window, click the vertical ellipsis and select Web Console . You can now access the full range of features available for host monitoring and management, for example, Red Hat Image Builder, through the RHEL web console. Additional resources For more information about using the RHEL web console, see the following documents: Managing systems using the RHEL 9 web console Managing systems using the RHEL 8 web console Managing systems using the RHEL 7 web console For more information about using Red Hat Image Builder through the RHEL web console, see the following documents: Accessing the RHEL image builder dashboard in the RHEL web console in Composing a customized RHEL system image (RHEL 9) Accessing the RHEL image builder dashboard in the RHEL web console in Composing a customized RHEL system image (RHEL 8) Accessing Image Builder GUI in the RHEL 7 web console in Image Builder Guide (RHEL 7) 9.3. Disabling the RHEL web console on Satellite Perform the following procedure if you want to disable the RHEL web console on Satellite. Procedure Disable the RHEL web console on your Satellite Server: Important You can enable or disable RHEL web console integration independently on Capsule Servers. To prevent enabling RHEL web console integration on a Capsule Server, enter the following command after completing the procedure: | [
"satellite-installer --enable-foreman-plugin-remote-execution-cockpit --reset-foreman-plugin-remote-execution-cockpit-ensure",
"satellite-installer --foreman-plugin-remote-execution-cockpit-ensure absent",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-cockpit-integration false"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_hosts/host-management-and-monitoring-by-using-cockpit |
Chapter 7. Managing credentials | Chapter 7. Managing credentials Credentials authenticate the controller user to launch Ansible playbooks. The passwords and SSH keys are used to authenticate against inventory hosts. By using the credentials feature of automation controller, you can require the automation controller user to enter a password or key phrase when a playbook launches. 7.1. Creating new credentials As part of the initial setup, a demonstration credential and a Galaxy credential have been created for your use. Use the Galaxy credential as a template. It can be copied, but not edited. You can add more credentials as necessary. Procedure From the navigation panel, select Resources Credentials . To add a new credential, see Creating a credential in the Automation controller User Guide . Note When you set up additional credentials, the user you assign must have root access or be able to use SSH to connect to the host machine. Click Demo Credential to view its details. 7.2. Editing a credential As part of the initial setup, you can leave the default Demo Credential as it is, and you can edit it later. Procedure Edit the credential by using one of these methods: Go to the credential Details page and click Edit . From the navigation panel, select Resources Credentials . Click Edit next to the credential name and edit the appropriate details. Save your changes. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_automation_controller/controller-credentials
Users and Identity Management Guide | Users and Identity Management Guide Red Hat OpenStack Platform 16.0 Managing users and keystone authentication OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/users_and_identity_management_guide/index |
function::error | function::error Name function::error - Send an error message Synopsis Arguments msg The formatted message string Description An implicit end-of-line is added. staprun prepends the string " ERROR: " . Sending an error message aborts the currently running probe. Depending on the MAXERRORS parameter, it may trigger an exit . | [
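"stap -e 'probe begin { error(\"input file is missing\") }' # hypothetical usage example, not from the original page: staprun prints ERROR: input file is missing, the probe is aborted, and with the default MAXERRORS setting the session exits",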
"error(msg:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-error |
Chapter 78. task | Chapter 78. task This chapter describes the commands under the task command. 78.1. task execution list List all tasks. Usage: Table 78.1. Positional Arguments Value Summary workflow_execution Workflow execution id associated with list of tasks. Table 78.2. Optional Arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. --oldest Display the executions starting from the oldest entries instead of the newest Table 78.3. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 78.4. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 78.5. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 78.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.2. task execution published show Show task published variables. Usage: Table 78.7. Positional Arguments Value Summary id Task id Table 78.8. Optional Arguments Value Summary -h, --help Show this help message and exit 78.3. task execution rerun Rerun an existing task. Usage: Table 78.9. Positional Arguments Value Summary id Task identifier Table 78.10. Optional Arguments Value Summary -h, --help Show this help message and exit --resume Rerun only failed or unstarted action executions for with-items task -e ENV, --env ENV Environment variables Table 78.11. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 78.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 78.13. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 78.14. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 78.4. task execution result show Show task output data. Usage: Table 78.15. Positional Arguments Value Summary id Task id Table 78.16. 
Optional Arguments Value Summary -h, --help Show this help message and exit 78.5. task execution show Show specific task. Usage: Table 78.17. Positional Arguments Value Summary task Task identifier Table 78.18. Optional Arguments Value Summary -h, --help Show this help message and exit Table 78.19. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 78.20. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 78.21. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 78.22. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
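"# Hypothetical usage examples combining the options documented above; the IDs are placeholders",
"openstack task execution list <workflow_execution_id> --limit 10 --sort_keys created_at --sort_dirs desc",
"openstack task execution show <task_id>",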
"openstack task execution list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] [--oldest] [workflow_execution]",
"openstack task execution published show [-h] id",
"openstack task execution rerun [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--resume] [-e ENV] id",
"openstack task execution result show [-h] id",
"openstack task execution show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] task"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/task |
8.52. ghostscript | 8.52. ghostscript 8.52.1. RHBA-2013:1624 - ghostscript bug fix update Updated ghostscript packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The Ghostscript suite contains utilities for rendering PostScript and PDF documents. Ghostscript translates PostScript code to common, bitmap formats so that the code can be displayed or printed. Bug Fixes BZ# 893775 Due to a bug in a function that copies CID-keyed Type 2 fonts, document conversion attempts sometimes caused the ps2pdf utility to terminate unexpectedly with a segmentation fault. A patch has been provided to address this bug so that the function now copies fonts properly and ps2pdf no longer crashes when converting documents. BZ# 916162 Due to lack of support for the TPGDON option for JBIG2 encoded regions, some PDF files were not displayed correctly. A patch has been provided to add this support so that PDF files using the TPGDON option are now displayed correctly. BZ# 1006165 Previously, some PDF files with incomplete ASCII base-85 encoded images caused the ghostscript utility to terminate with the following error: /syntaxerror in ID The problem occurred when the image ended with "~" (tilde) instead of "~>" (tilde, right angle bracket) as defined in the PDF specification. Although this is an improper encoding, an upstream patch has been applied, and ghostscript now handles these PDF files without errors. Users of ghostscript are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/ghostscript |
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of OpenShift Data Foundation, follow these steps: Set up a chrony server. See Configuring chrony time service and use the knowledge base solution to create rules allowing all traffic. Optional: If you want to enable cluster-wide encryption using an external Key Management System (KMS): Ensure that a policy with a token exists and the key value backend path in Vault is enabled. See enabled the key value backend path and policy in Vault . Ensure that you are using signed certificates on your Vault servers. Minimum starting node requirements [Technology Preview] An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription For detailed requirements, see Regional-DR requirements and RHACM requirements . 1.1. Enabling key value backend path and policy in Vault Prerequisites Administrator access to Vault. Carefully choose a unique path name as the backend path that follows the naming convention since it cannot be changed later. Procedure Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict users to perform a write or delete operation on the secret using the following commands. Create a token matching the above policy. | [
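"# Hypothetical verification of the procedure above: confirm that the odf backend path is enabled and read back the policy",
"vault secrets list | grep odf",
"vault policy read odf",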
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_microsoft_azure/preparing_to_deploy_openshift_data_foundation |
Chapter 5. Configuring the web console in OpenShift Container Platform | Chapter 5. Configuring the web console in OpenShift Container Platform You can modify the OpenShift Container Platform web console to set a logout redirect URL or disable the quick start tutorials. 5.1. Prerequisites Deploy an OpenShift Container Platform cluster. 5.2. Configuring the web console You can configure the web console settings by editing the console.config.openshift.io resource. Edit the console.config.openshift.io resource: USD oc edit console.config.openshift.io cluster The following example displays the sample resource definition for the console: apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: "" 1 status: consoleURL: "" 2 1 Specify the URL of the page to load when a user logs out of the web console. If you do not specify a value, the user returns to the login page for the web console. Specifying a logoutRedirect URL allows your users to perform single logout (SLO) through the identity provider to destroy their single sign-on session. 2 The web console URL. To update this to a custom value, see Customizing the web console URL . 5.3. Disabling quick starts in the web console You can use the Administrator perspective of the web console to disable one or more quick starts. Prerequisites You have cluster administrator permissions and are logged in to the web console. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . On the Cluster Settings page, click the Configuration tab. On the Configuration page, click the Console configuration resource with the description operator.openshift.io . From the Action drop-down list, select Customize , which opens the Cluster configuration page. On the General tab, in the Quick starts section, you can select items in either the Enabled or Disabled list, and move them from one list to the other by using the arrow buttons. To enable or disable a single quick start, click the quick start, then use the single arrow buttons to move the quick start to the appropriate list. To enable or disable multiple quick starts at once, press Ctrl and click the quick starts you want to move. Then, use the single arrow buttons to move the quick starts to the appropriate list. To enable or disable all quick starts at once, click the double arrow buttons to move all of the quick starts to the appropriate list. | [
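"# Hypothetical non-interactive alternative to oc edit for setting the logout redirect; the URL is an example value only",
"oc patch console.config.openshift.io cluster --type merge -p '{\"spec\":{\"authentication\":{\"logoutRedirect\":\"https://idp.example.com/logout\"}}}'",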
"oc edit console.config.openshift.io cluster",
"apiVersion: config.openshift.io/v1 kind: Console metadata: name: cluster spec: authentication: logoutRedirect: \"\" 1 status: consoleURL: \"\" 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/web_console/configuring-web-console |
Chapter 193. Kie-Camel | Chapter 193. Kie-Camel 193.1. Overview The kie-camel component is an Apache Camel endpoint provided by Red Hat Fuse that integrates Fuse with Red Hat Process Automation Manager. It enables you to specify a Red Hat Process Automation Manager module by using a Maven group ID, artifact ID, and version (GAV) identifier which you can pull into the route and execute. It also enables you to specify portions of the message body as facts. You can use the kie-camel component with embedded engines or with Process Server. For more details about the kie-camel component, see Integrating Red Hat Fuse with Red Hat Process Automation Manager . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kie_camel |
Chapter 4. Compatibility matrix for Red Hat Ceph Storage 4.3 | Chapter 4. Compatibility matrix for Red Hat Ceph Storage 4.3 The following tables list products and their versions compatible with Red Hat Ceph Storage (RHCS) 4.3. Host Operating System Version Notes Red Hat Enterprise Linux 7.9, 8.4 EUS, 8.5 and 8.6 Included in the product Important All nodes in the cluster and their clients must use the supported OS version(s) to ensure that the version of the ceph package is the same on all nodes. Using different versions of the ceph package is not supported. Important Red Hat no longer supports using Ubuntu as a host operating system to deploy Red Hat Ceph Storage. Product Version Notes Ansible 2.9 Included in the product Red Hat OpenShift 3.0 and later 3.x versions The RBD driver is supported starting with version 3.0 and Cinder driver is supported starting with version 3.1. Red Hat OpenShift 4 is not supported. Red Hat OpenStack Platform 13, 15 and 16 Red Hat OpenStack Platform 11, 12 and 14 is not supported. Red Hat OpenStack Platform 10 and Red Hat Ceph Storage 4 are not a tested combination. Director 13 deployed Red Hat Ceph Storage 3.1+ also supports external Red Hat Ceph Storage 4. Director 16 deployed Red Hat Ceph Storage 4.3 also supports external Red Hat Ceph Storage 5. Red Hat Satellite 6.x Only registering with the Content Delivery Network (CDN) is supported. Registering with Red Hat Network (RHN) is deprecated and not supported. Red Hat OpenShift Container Storage 4.6 Red Hat Ceph Storage 4.2 and later versions are supported for Red Hat OpenShift Storage 4.6 external mode. Client Connector Version Notes S3A 2.8.x, 3.2.x, and trunk Red Hat Enterprise Linux iSCSI Initiator The latest versions of the iscsi-initiator-utils and device-mapper-multipath packages RHCS as a backup target Version Notes CommVault Cloud Data Management v11 IBM Spectrum Protect Plus 10.1.5 IBM Spectrum Protect server 8.1.8 NetApp AltaVault 4.3.2 and 4.4 Rubrik Cloud Data Management (CDM) 3.2 and later Trilio, TrilioVault 3.0 S3 target Veeam (object storage) Veeam Availability Suite 9.5 Update 4 Supported on Red Hat Ceph Storage object storage with the S3 protocol Veritas NetBackup for Symantec OpenStorage (OST) cloud backup 7.7 and 8.0 Independent Software vendors Version Notes IBM Spectrum Discover 2.0.3 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/compatibility_guide/compatibility-matrix-for-red-hat-ceph-storage-4-3 |
Chapter 13. Configuring seccomp profiles | Chapter 13. Configuring seccomp profiles An OpenShift Container Platform container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Secure computing mode, seccomp, is a Linux kernel feature that can be used to limit the process running in a container to only using a subset of the available system calls. The restricted-v2 SCC applies to all newly created pods in 4.17. The default seccomp profile runtime/default is applied to these pods. Seccomp profiles are stored as JSON files on the disk. Important Seccomp profiles cannot be applied to privileged containers. 13.1. Verifying the default seccomp profile applied to a pod OpenShift Container Platform ships with a default seccomp profile that is referenced as runtime/default . In 4.17, newly created pods have the Security Context Constraint (SCC) set to restricted-v2 and the default seccomp profile applies to the pod. Procedure You can verify the Security Context Constraint (SCC) and the default seccomp profile set on a pod by running the following commands: Verify what pods are running in the namespace: USD oc get pods -n <namespace> For example, to verify what pods are running in the workshop namespace run the following: USD oc get pods -n workshop Example output NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s Inspect the pods: USD oc get pod parksmap-1-4xkwf -n workshop -o yaml Example output apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2 1 The restricted-v2 SCC is added by default if your workload does not have access to a different SCC. 2 Newly created pods in 4.17 will have the seccomp profile configured to runtime/default as mandated by the SCC. 13.1.1. Upgraded cluster In clusters upgraded to 4.17, all authenticated users have access to the restricted and restricted-v2 SCC. A workload that was admitted by the restricted SCC, for example on an OpenShift Container Platform v4.10 cluster, may get admitted by restricted-v2 after the upgrade. This is because restricted-v2 is the more restrictive SCC between restricted and restricted-v2 . Note The workload must be able to run with restricted-v2 . Conversely, a workload that requires privilegeEscalation: true continues to have the restricted SCC available for any authenticated user. This is because restricted-v2 does not allow privilegeEscalation . 13.1.2. Newly installed cluster For newly installed OpenShift Container Platform 4.11 or later clusters, the restricted-v2 replaces the restricted SCC as an SCC that is available to be used by any authenticated user. A workload with privilegeEscalation: true is not admitted into the cluster since restricted-v2 is the only SCC available for authenticated users by default.
The feature privilegeEscalation is allowed by restricted but not by restricted-v2 . More features are denied by restricted-v2 than were allowed by restricted SCC. A workload with privilegeEscalation: true may be admitted into a newly installed OpenShift Container Platform 4.11 or later cluster. To give access to the restricted SCC to the ServiceAccount running the workload (or any other SCC that can admit this workload) using a RoleBinding run the following command: USD oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name> In OpenShift Container Platform 4.17 the ability to add the pod annotations seccomp.security.alpha.kubernetes.io/pod: runtime/default and container.seccomp.security.alpha.kubernetes.io/<container_name>: runtime/default is deprecated. 13.2. Configuring a custom seccomp profile You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in OpenShift Container Platform. Seccomp security profiles list the system calls (syscalls) a process can make. Permissions are broader than SELinux, which restrict operations, such as write , system-wide. 13.2.1. Creating seccomp profiles You can use the MachineConfig object to create profiles. Seccomp can restrict system calls (syscalls) within a container, limiting the access of your application. Prerequisites You have cluster admin permissions. You have created a custom security context constraints (SCC). For more information, see Additional resources . Procedure Create the MachineConfig object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json 13.2.2. Setting up the custom seccomp profile Prerequisite You have cluster administrator permissions. You have created a custom security context constraints (SCC). For more information, see "Additional resources". You have created a custom seccomp profile. Procedure Upload your custom seccomp profile to /var/lib/kubelet/seccomp/<custom-name>.json by using the Machine Config. See "Additional resources" for detailed steps. Update the custom SCC by providing reference to the created custom seccomp profile: seccompProfiles: - localhost/<custom-name>.json 1 1 Provide the name of your custom seccomp profile. 13.2.3. Applying the custom seccomp profile to the workload Prerequisite The cluster administrator has set up the custom seccomp profile. For more details, see "Setting up the custom seccomp profile". Procedure Apply the seccomp profile to the workload by setting the securityContext.seccompProfile.type field as following: Example spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1 1 Provide the name of your custom seccomp profile. Alternatively, you can use the pod annotations seccomp.security.alpha.kubernetes.io/pod: localhost/<custom-name>.json . However, this method is deprecated in OpenShift Container Platform 4.17. During deployment, the admission controller validates the following: The annotations against the current SCCs allowed by the user role. The SCC, which includes the seccomp profile, is allowed for the pod. 
If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile. Important Ensure that the seccomp profile is deployed to all worker nodes. Note The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. 13.3. Additional resources Managing security context constraints Machine Config Overview | [
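"# Hypothetical verification sketch: confirm that the custom profile file reached a worker node and check which profile a pod requests; the node, pod, and profile names are placeholders",
"oc debug node/<node_name> -- chroot /host cat /var/lib/kubelet/seccomp/<custom-name>.json",
"oc get pod <pod_name> -o jsonpath='{.spec.securityContext.seccompProfile}'",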
"oc get pods -n <namespace>",
"oc get pods -n workshop",
"NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s",
"oc get pod parksmap-1-4xkwf -n workshop -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2",
"oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json",
"seccompProfiles: - localhost/<custom-name>.json 1",
"spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_and_compliance/seccomp-profiles |
Chapter 4. Integrating Serverless with OpenShift Pipelines | Chapter 4. Integrating Serverless with OpenShift Pipelines Integrating Serverless with OpenShift Pipelines enables CI/CD pipeline management for Serverless services. Using this integration, you can automate the deployment of your Serverless services. 4.1. Prerequisites You have access to the cluster with cluster-admin privileges. The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the OpenShift Pipelines Operator on the cluster. 4.2. Creating a service deployed by OpenShift Pipelines Using the OpenShift Container Platform web console, you can create a service that the OpenShift Pipelines deploys. Procedure In the OpenShift Container Platform web console Developer perspective, navigate to +Add and select the Import from Git option. In the Import from Git dialog, specify project metadata by doing the following: Specify the Git repository URL. If necessary, specify the context directory. This is the subdirectory inside the repository that contains the root of application source code. Optional: Specify the application name. By default, the repository name is used. Select the Serverless Deployment resource type. Select the Add pipeline checkbox. The pipeline is automatically selected based on the source code and its visualization is shown on the scheme. Specify any other relevant settings. Click Create to create the service. After the service creation starts, you are navigated to the Topology screen, where your service and the related trigger are visualized and where you can interact with them. Optional: Verify that the pipeline has been created and that the service is being built and deployed by navigating to the Pipelines page: To see the details of the pipeline, click the pipeline on the Pipelines page. To see the details about the current pipeline run, click the name of the run on the Pipelines page. 4.3. Additional resources Documentation for Red Hat OpenShift Pipelines | null | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/integrations/serverless-pipelines-integration |
Chapter 9. Installing a cluster on GCP into a shared VPC | Chapter 9. Installing a cluster on GCP into a shared VPC In OpenShift Container Platform version 4.13, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation . The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . You have a GCP host project which contains a shared VPC network. You configured a GCP project to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see Attaching service projects in the GCP documentation . You have a GCP service account that has the required GCP permissions in both the host and service projects. 9.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
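For instance, after the cluster is running, you might connect to a node as the core user with the key pair that you create in the following procedure; this is a sketch only, and the node address is a placeholder:
ssh -i <path>/<file_name> core@<node_address>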
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program.
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 9.5. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) into a shared VPC, you must generate the install-config.yaml file and modify it so that the cluster uses the correct VPC networks, DNS zones, and project names. 9.5.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 9.5.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 9.5.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Important Confidential Computing is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Due to a known issue in OpenShift Container Platform 4.13.3 and earlier versions, you cannot use persistent volume storage on a cluster with Confidential VMs on Google Cloud Platform (GCP). This issue was resolved in OpenShift Container Platform 4.13.4. For more information, see OCPBUGS-11768 . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 9.5.4. Sample customized install-config.yaml file for shared VPC installation There are several configuration parameters which are required to install OpenShift Container Platform on GCP using a shared VPC. The following is a sample install-config.yaml file which demonstrates these fields. Important This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster. apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 10 1 credentialsMode must be set to Passthrough or Manual . See the "Prerequisites" section for the required GCP permissions that your service account must have. 2 The name of the subnet in the shared VPC for compute machines to use. 3 The name of the subnet in the shared VPC for control plane machines to use. 4 The name of the shared VPC. 5 The name of the host project where the shared VPC exists. 6 The name of the GCP project where you want to install the cluster. 7 8 9 Optional. 
One or more network tags to apply to compute machines, control plane machines, or all machines. 10 You can optionally provide the sshKey value that you use to access the machines in your cluster. 9.5.5. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.5.5.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.5.5.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 9.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. 
OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.5.5.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. 
For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . 
Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 9.5.5.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 9.4. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. platform.gcp.networkProjectID Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. platform.gcp.projectID The name of the GCP project where the installation program installs the cluster. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.controlPlaneSubnet The name of the existing subnet where you want to deploy your control plane machines. The subnet name. platform.gcp.computeSubnet The name of the existing subnet where you want to deploy your compute machines. The subnet name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . 
You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installation program to copy the source image before use. platform.gcp.defaultMachinePlatform.zones The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.defaultMachinePlatform.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.defaultMachinePlatform.osDisk.diskType The GCP disk type . Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. Compute nodes can be either type. platform.gcp.defaultMachinePlatform.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for both types of machines. String. The name of GCP project where the image is located. platform.gcp.defaultMachinePlatform.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. platform.gcp.defaultMachinePlatform.tags Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . platform.gcp.defaultMachinePlatform.type The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.keyRing The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.location The GCP location in which the KMS key ring exists. The GCP location. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKey.projectID The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. platform.gcp.defaultMachinePlatform.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . platform.gcp.defaultMachinePlatform.secureBoot Whether to enable Shielded VM secure boot for all machines in the cluster. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . platform.gcp.defaultMachinePlatform.confidentialCompute Whether to use Confidential VMs for all machines in the cluster. Confidential VMs provide encryption for data during processing. 
For more information on Confidential computing, see Google's documentation on Confidential computing . Enabled or Disabled . The default value is Disabled . platform.gcp.defaultMachinePlatform.onHostMaintenance Specifies the behavior of all VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . controlPlane.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. controlPlane.platform.gcp.osDisk.diskType The GCP disk type for control plane machines. Control plane machines must use the pd-ssd disk type, which is the default. controlPlane.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for control plane machines only. String. The name of GCP project where the image is located. controlPlane.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. controlPlane.platform.gcp.tags Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . controlPlane.platform.gcp.type The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . controlPlane.platform.gcp.zones The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . 
controlPlane.platform.gcp.secureBoot Whether to enable Shielded VM secure boot for control plane machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . controlPlane.platform.gcp.confidentialCompute Whether to enable Confidential VMs for control plane machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . controlPlane.platform.gcp.onHostMaintenance Specifies the behavior of control plane VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKeyServiceAccount The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . compute.platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. compute.platform.gcp.osDisk.diskType The GCP disk type for compute machines. Either the default pd-ssd or the pd-standard disk type. compute.platform.gcp.osImage.project Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image for the installation program to use for compute machines only. String. The name of GCP project where the image is located. compute.platform.gcp.osImage.name The name of the custom RHCOS image for the installation program to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. compute.platform.gcp.tags Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . compute.platform.gcp.type The GCP machine type for compute machines. 
If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . compute.platform.gcp.zones The availability zones where the installation program creates compute machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . compute.platform.gcp.secureBoot Whether to enable Shielded VM secure boot for compute machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . compute.platform.gcp.confidentialCompute Whether to enable Confidential VMs for compute machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . compute.platform.gcp.onHostMaintenance Specifies the behavior of compute VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . 9.5.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 9.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 
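The optional permission-reduction step above can also be scripted with the gcloud CLI. This is a hedged sketch only; the service account address and project ID are hypothetical placeholders, and you should confirm the role names against your own IAM policy before running the commands.
SA=<service_account_name>@<service_project_id>.iam.gserviceaccount.com   # placeholder service account used for the installation
gcloud projects remove-iam-policy-binding <service_project_id> --member="serviceAccount:${SA}" --role="roles/owner"
gcloud projects add-iam-policy-binding <service_project_id> --member="serviceAccount:${SA}" --role="roles/viewer"
gcloud projects remove-iam-policy-binding <service_project_id> --member="serviceAccount:${SA}" --role="roles/iam.serviceAccountKeyAdmin"   # only if this role was granted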
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 9.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.9. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 9.10. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 10",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_gcp/installing-gcp-shared-vpc |
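In addition to the oc whoami check in section 9.8, a few optional post-installation health checks can confirm that the shared-VPC cluster is functioning. This is an illustrative sketch using standard oc commands; <installation_directory> is the directory created earlier in this chapter.
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc get nodes              # every control plane and compute node should report a Ready status
oc get clusteroperators   # each operator should show AVAILABLE True and DEGRADED False
oc get clusterversion     # confirms the installed OpenShift Container Platform 4.13 version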
11. Entitlement | 11. Entitlement Red Hat Subscription Manager and the Subscription Service Effective software and infrastructure management requires a mechanism to handle the software inventory - both the type of products and the number of systems that the software is installed on. In parallel with Red Hat Enterprise Linux 6.1, Red Hat is introducing a new subscription service which provides oversight for the software subscriptions for an organization and a more effective content delivery system. On local systems, the new Red Hat Subscription Manager offers both GUI and command-line tools to manage the local system and its allocated subscriptions. A better method to handle subscriptions will help our customers allocate their subscriptions more effectively and will make installing and updating Red Hat products much simpler. In Red Hat Enterprise Linux 6.0 and 5.6 and older, subscriptions were based on access to channels and were assigned to an organization as a whole. Starting in Red Hat Enterprise Linux 6.1, subscriptions are based on installed products and are assigned to systems individually. This provides clear and delineated control over the products used by and subscribed to by a specific system. As part of the new subscription structure, the Customer Portal provides two paths to manage subscriptions: Certificate-based Red Hat Network, which uses the new subscription service, and RHN Classic, which uses the traditional channels. Systems must be managed either by the new Certificate-based Red Hat Network or by RHN Classic, but not both. If a system was previously managed by RHN Classic, there is no direct, supported migration path from RHN Classic to Certificate-based Red Hat Network. Note The Red Hat Enterprise Linux 6 Subscription Management Guide contains further information on managing subscriptions. The Red Hat Enterprise Linux 6 Installation Guide contains further information on the registration and subscription process during firstboot and kickstart. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_release_notes/ar01s11 |
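As a rough illustration of the command-line tools mentioned above, a typical registration and subscription sequence with Red Hat Subscription Manager might look like the following. The user name and pool ID are placeholders; see the Red Hat Enterprise Linux 6 Subscription Management Guide for the authoritative procedure.
subscription-manager register --username=<portal_username>   # register the system with Certificate-based Red Hat Network
subscription-manager list --available                         # list the subscriptions this system is eligible to use
subscription-manager subscribe --pool=<pool_id>               # allocate a subscription to this system
subscription-manager list --consumed                          # confirm the subscriptions now assigned to the system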
function::task_euid | function::task_euid Name function::task_euid - The effective user identifier of the task. Synopsis Arguments task task_struct pointer. General Syntax task_euid:long(task:long) Description This function returns the effective user id of the given task. | [
"function task_euid:long(task:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-euid |
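For illustration only, task_euid can be combined with task_current to report the effective UID of the task hit by a probe. The probe points below are arbitrary examples and are not part of this reference page.
stap -e 'probe begin { printf("stap runs with euid %d\n", task_euid(task_current())); exit() }'
stap -e 'probe syscall.openat { printf("%s (pid %d) euid=%d\n", execname(), pid(), task_euid(task_current())) }'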
Chapter 6. Adding Storage for Red Hat Virtualization | Chapter 6. Adding Storage for Red Hat Virtualization Add storage as data domains in the new environment. A Red Hat Virtualization environment must have at least one data domain, but adding more is recommended. Add the storage you prepared earlier: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 6.1. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 6.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the next step. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally to the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 .
If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Click the + button next to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 6.3. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field.
If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 6.4. Adding POSIX-compliant File System Storage This procedure shows you how to attach existing POSIX-compliant file system storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name for the storage domain. Select the Data Center to be associated with the storage domain. The data center selected must be of type POSIX (POSIX compliant FS) . Alternatively, select (none) . Select Data from the Domain Function drop-down list, and POSIX compliant FS from the Storage Type drop-down list. If applicable, select the Format from the drop-down menu. Select a host from the Host drop-down list. Enter the Path to the POSIX file system, as you would normally provide it to the mount command. Enter the VFS Type , as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types. Enter additional Mount Options , as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options (an illustrative mount invocation is shown at the end of this chapter). Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . 6.5. Adding Local Storage Adding local storage to a host places the host in a new data center and cluster. The local storage configuration window combines the creation of a data center, a cluster, and storage into a single process. Procedure Click Compute Hosts and select the host. Click Management Maintenance and click OK . Click Management Configure Local Storage . Click the Edit buttons next to the Data Center , Cluster , and Storage fields to configure and name the local storage domain. Set the path to your local storage in the text entry field. If applicable, click the Optimization tab to configure the memory optimization policy for the new local storage cluster.
Click OK . Your host comes online in a data center of its own. 6.6. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Adding_Storage_Domains_to_RHV_SM_localDB_deploy |
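The Path, VFS Type, and Mount Options fields in the POSIX-compliant file system procedure above correspond directly to the arguments of a mount command. The following sketch is illustrative only and is not part of the documented Administration Portal procedure; the server name, export path, options, and mount point are assumptions to be replaced with your own values.

# Try the equivalent mount from a host before creating the storage domain (run as root).
mkdir -p /mnt/posix-check
mount -t nfs -o vers=4,soft,timeo=30 storage.example.com:/exports/data /mnt/posix-check
# If the mount succeeds, the same values can be entered in the New Domain window:
#   Path           storage.example.com:/exports/data
#   VFS Type       nfs
#   Mount Options  vers=4,soft,timeo=30
umount /mnt/posix-check

Running such a check from the host selected in the New Domain window can catch an unsupported VFS type or an invalid mount option before the storage domain is created.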
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.18/making-open-source-more-inclusive |
API overview | API overview OpenShift Container Platform 4.18 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/api_overview/index |
Adding and configuring APIs | Adding and configuring APIs Red Hat OpenShift API Management 1 Adding and configuring APIs in Red Hat OpenShift API Management. Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_openshift_api_management/1/html/adding_and_configuring_apis/index |
Data Grid Cross-Site Replication | Data Grid Cross-Site Replication Red Hat Data Grid 8.4 Back up data between Data Grid clusters Red Hat Customer Content Services | [
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\" /> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\" } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\"",
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\"> <take-offline after-failures=\"5\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\", \"take-offline\" : { \"after-failures\" : \"5\" } } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\" takeOffline: afterFailures: \"5\"",
"<take-offline after-failures=\"-1\" min-wait=\"10000\"/>",
"<distributed-cache> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" timeout=\"10000\"> <take-offline after-failures=\"5\" min-wait=\"15000\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"NYC\" : { \"backup\" : { \"strategy\" : \"ASYNC\", \"timeout\" : \"10000\", \"take-offline\" : { \"after-failures\" : \"5\", \"min-wait\" : \"15000\" } } } } } }",
"distributedCache: backups: NYC: backup: strategy: \"ASYNC\" timeout: \"10000\" takeOffline: afterFailures: \"5\" minWait: \"15000\"",
"LON NYC k1=(n/a) 0,0 0,0 k1=2 1,0 --> 1,0 k1=2 k1=3 1,1 <-- 1,1 k1=3 k1=5 2,1 1,2 k1=8 --> 2,1 (conflict) (conflict) 1,2 <--",
"<infinispan> <jgroups> <stack name=\"xsite\" extends=\"udp\"> <relay.RELAY2 xmlns=\"urn:org:jgroups\" site=\"LON\" max_site_masters=\"1000\"/> <remote-sites default-stack=\"tcp\"> <remote-site name=\"LON\"/> <remote-site name=\"NYC\"/> </remote-sites> </stack> </jgroups> <cache-container> <transport cluster=\"USD{cluster.name}\" stack=\"xsite\"/> </cache-container> </infinispan>",
"<infinispan> <jgroups> <stack name=\"relay-global\" extends=\"tcp\"> <TCPPING initial_hosts=\"192.0.2.0[7800]\" stack.combine=\"REPLACE\" stack.position=\"MPING\"/> </stack> <stack name=\"xsite\" extends=\"udp\"> <relay.RELAY2 site=\"LON\" xmlns=\"urn:org:jgroups\" max_site_masters=\"10\" can_become_site_master=\"true\"/> <remote-sites default-stack=\"relay-global\"> <remote-site name=\"LON\"/> <remote-site name=\"NYC\"/> </remote-sites> </stack> </jgroups> </infinispan>",
"<replicated-cache name=\"customers\"> <backups> <backup site=\"NYC\" strategy=\"ASYNC\" /> </backups> </replicated-cache>",
"{ \"replicated-cache\": { \"name\": \"customers\", \"backups\": { \"NYC\": { \"backup\" : { \"strategy\" : \"ASYNC\" } } } } }",
"replicatedCache: name: \"customers\" backups: NYC: backup: strategy: \"ASYNC\"",
"<distributed-cache name=\"customers\"> <backups> <backup site=\"LON\" strategy=\"ASYNC\" /> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"name\": \"customers\", \"backups\": { \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: name: \"customers\" backups: LON: backup: strategy: \"ASYNC\"",
"<distributed-cache name=\"eu-customers\"> <backups> <backup site=\"LON\" strategy=\"ASYNC\" /> </backups> <backup-for remote-cache=\"customers\" remote-site=\"LON\" /> </distributed-cache>",
"{ \"distributed-cache\": { \"name\": \"eu-customers\", \"backups\": { \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } }, \"backup-for\" : { \"remote-cache\" : \"customers\", \"remote-site\" : \"LON\" } } }",
"distributedCache: name: \"eu-customers\" backups: LON: backup: strategy: \"ASYNC\" backupFor: remoteCache: \"customers\" remoteSite: \"LON\"",
"<distributed-cache name=\"eu-customers\"> <backups> <backup site=\"LON\" strategy=\"ASYNC\"> <state-transfer chunk-size=\"600\" timeout=\"2400000\" max-retries=\"30\" wait-time=\"2000\" mode=\"AUTO\"/> </backup> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"name\": \"eu-customers\", \"backups\": { \"LON\": { \"backup\": { \"strategy\": \"ASYNC\", \"state-transfer\": { \"chunk-size\": \"600\", \"timeout\": \"2400000\", \"max-retries\": \"30\", \"wait-time\": \"2000\", \"mode\": \"AUTO\" } } } } } }",
"distributedCache: name: \"eu-customers\" backups: LON: backup: strategy: \"ASYNC\" stateTransfer: chunkSize: \"600\" timeout: \"2400000\" maxRetries: \"30\" waitTime: \"2000\" mode: \"AUTO\"",
"<distributed-cache> <backups merge-policy=\"ALWAYS_REMOVE\"> <backup site=\"LON\" strategy=\"ASYNC\"/> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"merge-policy\": \"ALWAYS_REMOVE\", \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: backups: mergePolicy: \"ALWAYS_REMOVE\" LON: backup: strategy: \"ASYNC\"",
"<distributed-cache> <backups merge-policy=\"org.mycompany.MyCustomXSiteEntryMergePolicy\"> <backup site=\"LON\" strategy=\"ASYNC\"/> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"merge-policy\": \"org.mycompany.MyCustomXSiteEntryMergePolicy\", \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: backups: mergePolicy: \"org.mycompany.MyCustomXSiteEntryMergePolicy\" LON: backup: strategy: \"ASYNC\"",
"<distributed-cache> <backups tombstone-map-size=\"512000\" max-cleanup-delay=\"30000\"> <backup site=\"LON\" strategy=\"ASYNC\"/> </backups> </distributed-cache>",
"{ \"distributed-cache\": { \"backups\": { \"tombstone-map-size\": 512000, \"max-cleanup-delay\": 30000, \"LON\": { \"backup\": { \"strategy\": \"ASYNC\" } } } } }",
"distributedCache: backups: tombstoneMapSize: 512000 maxCleanupDelay: 30000 LON: backup: strategy: \"ASYNC\"",
"INFO [org.infinispan.XSITE] (jgroups-5,<server-hostname>) ISPN000439: Received new x-site view: [NYC] INFO [org.infinispan.XSITE] (jgroups-7,<server-hostname>) ISPN000439: Received new x-site view: [LON, NYC]",
"Servers at the active site infinispan.client.hotrod.server_list = LON_host1:11222,LON_host2:11222,LON_host3:11222 Servers at the backup site infinispan.client.hotrod.cluster.NYC = NYC_hostA:11222,NYC_hostB:11222,NYC_hostC:11222,NYC_hostD:11222",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServers(\"LON_host1:11222;LON_host2:11222;LON_host3:11222\") .addCluster(\"NYC\") .addClusterNodes(\"NYC_hostA:11222;NYC_hostB:11222;NYC_hostC:11222;NYC_hostD:11222\")",
"site status --cache=cacheName --site=NYC",
"site bring-online --cache=customers --site=NYC",
"site take-offline --cache=customers --site=NYC",
"site state-transfer-mode get --cache=cacheName --site=NYC",
"site state-transfer-mode set --cache=cacheName --site=NYC --mode=AUTO",
"site push-site-state --cache=cacheName --site=NYC",
"GET /rest/v2/caches/{cacheName}/x-site/backups/",
"{ \"NYC\": { \"status\": \"online\" }, \"LON\": { \"status\": \"mixed\", \"online\": [ \"NodeA\" ], \"offline\": [ \"NodeB\" ] } }",
"GET /rest/v2/caches/{cacheName}/x-site/backups/{siteName}",
"{ \"NodeA\":\"offline\", \"NodeB\":\"online\" }",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=take-offline",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=bring-online",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=start-push-state",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=cancel-push-state",
"GET /rest/v2/caches/{cacheName}/x-site/backups?action=push-state-status",
"{ \"NYC\":\"CANCELED\", \"LON\":\"OK\" }",
"POST /rest/v2/caches/{cacheName}/x-site/local?action=clear-push-state-status",
"GET /rest/v2/caches/{cacheName}/x-site/backups/{siteName}/take-offline-config",
"{ \"after_failures\": 2, \"min_wait\": 1000 }",
"PUT /rest/v2/caches/{cacheName}/x-site/backups/{siteName}/take-offline-config",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{siteName}?action=cancel-receive-state",
"GET /rest/v2/cache-managers/{cacheManagerName}/x-site/backups/",
"{ \"SFO-3\":{ \"status\":\"online\" }, \"NYC-2\":{ \"status\":\"mixed\", \"online\":[ \"CACHE_1\" ], \"offline\":[ \"CACHE_2\" ], \"mixed\": [ \"CACHE_3\" ] } }",
"GET /rest/v2/cache-managers/{cacheManagerName}/x-site/backups/{site}",
"POST /rest/v2/cache-managers/{cacheManagerName}/x-site/backups/{siteName}?action=take-offline",
"POST /rest/v2/cache-managers/{cacheManagerName}/x-site/backups/{siteName}?action=bring-online",
"GET /rest/v2/caches/{cacheName}/x-site/backups/{site}/state-transfer-mode",
"POST /rest/v2/caches/{cacheName}/x-site/backups/{site}/state-transfer-mode?action=set&mode={mode}",
"POST /rest/v2/cache-managers/{cacheManagerName}/x-site/backups/{siteName}?action=start-push-state",
"POST /rest/v2/cache-managers/{cacheManagerName}/x-site/backups/{siteName}?action=cancel-push-state",
"<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>",
"{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }",
"infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\""
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html-single/data_grid_cross-site_replication/index |
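The cross-site REST operations listed above can be exercised with any HTTP client. The following curl sketch is a hypothetical invocation rather than an excerpt from the product documentation: the hostname, credentials, cache name, and authentication mechanism are assumptions and depend on how your Data Grid Server is configured.

# Check cross-site replication status for an assumed 'customers' cache.
curl -u admin:changeme \
  "http://infinispan.example.com:11222/rest/v2/caches/customers/x-site/backups/"

# Take the NYC backup site offline for the same cache.
curl -u admin:changeme -X POST \
  "http://infinispan.example.com:11222/rest/v2/caches/customers/x-site/backups/NYC?action=take-offline"

The endpoint paths match the GET and POST requests shown earlier; only the transport and credential details are assumed.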
8.182. polkit-gnome | 8.182. polkit-gnome 8.182.1. RHBA-2014:0425 - polkit-gnome bug fix update Updated polkit-gnome packages that fix one bug are now available for Red Hat Enterprise Linux 6. The polkit-gnome packages provide an authentication agent for the polkit authentication manager, which is an application-level toolkit for defining and handling the policy that allows non-privileged processes to communicate with privileged ones. * Due to a bug in the source code, the authentication dialog of the polkit GNOME authentication manager could send an invalid time stamp to the window manager when the dialog was displayed for the first time. Consequently, the dialog did not receive focus for keyboard input, and the input was sent to the previously-focused window instead. This bug has been fixed, and valid time stamps are now obtained and sent to the window manager. As a result, keyboard input is always sent to the displayed authentication dialog as expected. (BZ# 872918 ) Bug Fix BZ# 872918 Due to a bug in the source code, the authentication dialog of the polkit GNOME authentication manager could send an invalid time stamp to the window manager when the dialog was displayed for the first time. Consequently, the dialog did not receive focus for keyboard input, and the input was sent to the previously-focused window instead. This bug has been fixed, and valid time stamps are now obtained and sent to the window manager. As a result, keyboard input is always sent to the displayed authentication dialog as expected. Users of polkit-gnome are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/polkit-gnome
Chapter 1. Red Hat build of OpenJDK overview | Chapter 1. Red Hat build of OpenJDK overview OpenJDK is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is based on the upstream OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u projects and includes the Shenandoah Garbage Collector in all versions. Multi-platform - The Red Hat build of OpenJDK is now supported on Windows and RHEL. This helps you standardize on a single Java platform across desktop, datacenter, and hybrid cloud. Frequent releases - Red Hat delivers quarterly updates of JRE and JDK for the Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 17 distributions. These are available as rpm , portables, msi , zip files and containers. Long-term support - Red Hat supports the recently released Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 17 distributions. For more information about the support lifecycle, see OpenJDK Life Cycle and Support Policy . Java Web Start - Red Hat build of OpenJDK supports Java Web Start for RHEL. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/getting_started_with_red_hat_build_of_openjdk_8/openjdk-overview |
16.8. Storage Devices | 16.8. Storage Devices You can install Red Hat Enterprise Linux on a large variety of storage devices. This screen allows you to select either basic or specialized storage devices. Figure 16.4. Storage devices Basic Storage Devices Select Basic Storage Devices to install Red Hat Enterprise Linux on the following storage devices: hard drives or solid-state drives connected directly to the local system. Specialized Storage Devices Select Specialized Storage Devices to install Red Hat Enterprise Linux on the following storage devices: Storage area networks (SANs) Direct access storage devices (DASDs) Firmware RAID devices Multipath devices Use the Specialized Storage Devices option to configure Internet Small Computer System Interface (iSCSI) and FCoE (Fiber Channel over Ethernet) connections. If you select Basic Storage Devices , anaconda automatically detects the local storage attached to the system and does not require further input from you. Proceed to Section 16.9, "Setting the Hostname" . Note Monitoring of LVM and software RAID devices by the mdeventd daemon is not performed during installation. 16.8.1. The Storage Devices Selection Screen The storage devices selection screen displays all storage devices to which anaconda has access. Figure 16.5. Select storage devices - Basic devices Figure 16.6. Select storage devices - Multipath Devices Figure 16.7. Select storage devices - Other SAN Devices Devices are grouped under the following tabs: Basic Devices Basic storage devices directly connected to the local system, such as hard disk drives and solid-state drives. Firmware RAID Storage devices attached to a firmware RAID controller. Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fiber Channel ports on the same system. Important The installer only detects multipath storage devices with serial numbers that are 16 or 32 characters in length. Other SAN Devices Any other devices available on a storage area network (SAN). If you do need to configure iSCSI or FCoE storage, click Add Advanced Target and refer to Section 16.8.1.1, " Advanced Storage Options " . The storage devices selection screen also contains a Search tab that allows you to filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN) at which they are accessed. Figure 16.8. The Storage Devices Search Tab The tab contains a drop-down menu to select searching by port, target, WWID, or LUN (with corresponding text boxes for these values). Searching by WWID or LUN requires additional values in the corresponding text box. Each tab presents a list of devices detected by anaconda , with information about the device to help you to identify it. A small drop-down menu marked with an icon is located to the right of the column headings. This menu allows you to select the types of data presented on each device. For example, the menu on the Multipath Devices tab allows you to specify any of WWID , Capacity , Vendor , Interconnect , and Paths to include among the details presented for each device. Reducing or expanding the amount of information presented might help you to identify particular devices. Figure 16.9. Selecting Columns Each device is presented on a separate row, with a checkbox to its left. 
Click the checkbox to make a device available during the installation process, or click the radio button at the left of the column headings to select or deselect all the devices listed in a particular screen. Later in the installation process, you can choose to install Red Hat Enterprise Linux onto any of the devices selected here, and can choose to automatically mount any of the other devices selected here as part of the installed system. Note that the devices that you select here are not automatically erased by the installation process. Selecting a device on this screen does not, in itself, place data stored on the device at risk. Note also that any devices that you do not select here to form part of the installed system can be added to the system after installation by modifying the /etc/fstab file (an example entry is shown at the end of this section). Important Any storage devices that you do not select on this screen are hidden from anaconda entirely. To chain load the Red Hat Enterprise Linux boot loader from a different boot loader, select all the devices presented in this screen. When you have selected the storage devices to make available during installation, click Next and proceed to Section 16.13, "Initializing the Hard Disk" 16.8.1.1. Advanced Storage Options From this screen you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre channel over ethernet) SAN (storage area network). Refer to Appendix B, iSCSI Disks for an introduction to iSCSI. Figure 16.10. Advanced Storage Options Select Add iSCSI target or Add FCoE SAN and click Add drive . If adding an iSCSI target, optionally check the box labeled Bind targets to network interfaces . 16.8.1.1.1. Select and configure a network interface The Advanced Storage Options screen lists the active network interfaces anaconda has found on your system. If none are found, anaconda must activate an interface through which to connect to the storage devices. Click Configure Network on the Advanced Storage Options screen to configure and activate one using NetworkManager to use during installation. Alternatively, anaconda will prompt you with the Select network interface dialog after you click Add drive . Figure 16.11. Select network interface Select an interface from the drop-down menu. Click OK . Anaconda then starts NetworkManager to allow you to configure the interface. Figure 16.12. Network Connections For details of how to use NetworkManager , refer to Section 16.9, "Setting the Hostname" 16.8.1.1.2. Configure iSCSI parameters To add an iSCSI target, select Add iSCSI target and click Add drive . To use iSCSI storage devices for the installation, anaconda must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a username and password for CHAP (Challenge Handshake Authentication Protocol) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached ( reverse CHAP ), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP . Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the username and password are different for CHAP authentication and reverse CHAP authentication. Repeat the iSCSI discovery and iSCSI login steps as many times as necessary to add all required iSCSI storage. However, you cannot change the name of the iSCSI initiator after you attempt discovery for the first time.
To change the iSCSI initiator name, you must restart the installation. Procedure 16.1. iSCSI discovery Use the iSCSI Discovery Details dialog to provide anaconda with the information that it needs to discover the iSCSI target. Figure 16.13. The iSCSI Discovery Details dialog Enter the IP address of the iSCSI target in the Target IP Address field. Provide a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN contains: the string iqn. (note the period) a date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage a colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example, :diskarrays-sn-a8675309 . A complete IQN therefore resembles: iqn.2010-09.com.example.storage:diskarrays-sn-a8675309 , and anaconda pre-populates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information on IQNs, refer to 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from http://tools.ietf.org/html/rfc3720#section-3.2.6 and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from http://tools.ietf.org/html/rfc3721#section-1 . Use the drop-down menu to specify the type of authentication to use for iSCSI discovery: Figure 16.14. iSCSI discovery authentication no credentials CHAP pair CHAP pair and a reverse pair If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 16.15. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password field and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 16.16. CHAP pair and a reverse pair Click Start Discovery . Anaconda attempts to discover an iSCSI target based on the information that you provided. If discovery succeeds, the iSCSI Discovered Nodes dialog presents you with a list of all the iSCSI nodes discovered on the target. Each node is presented with a checkbox beside it. Click the checkboxes to select the nodes to use for installation. Figure 16.17. The iSCSI Discovered Nodes dialog Click Login to initiate an iSCSI session. Procedure 16.2. Starting an iSCSI session Use the iSCSI Nodes Login dialog to provide anaconda with the information that it needs to log into the nodes on the iSCSI target and start an iSCSI session. Figure 16.18. The iSCSI Nodes Login dialog Use the drop-down menu to specify the type of authentication to use for the iSCSI session: Figure 16.19.
iSCSI session authentication no credentials CHAP pair CHAP pair and a reverse pair Use the credentials from the discovery step If your environment uses the same type of authentication and same username and password for iSCSI discovery and for the iSCSI session, select Use the credentials from the discovery step to reuse these credentials. If you selected CHAP pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields. Figure 16.20. CHAP pair If you selected CHAP pair and a reverse pair as the authentication type, provide the username and password for the iSCSI target in the CHAP Username and CHAP Password fields and the username and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. Figure 16.21. CHAP pair and a reverse pair Click Login . Anaconda attempts to log into the nodes on the iSCSI target based on the information that you provided. The iSCSI Login Results dialog presents you with the results. Figure 16.22. The iSCSI Login Results dialog Click OK to continue. 16.8.1.1.3. Configure FCoE Parameters To configure an FCoE SAN, select Add FCoE SAN and click Add Drive . In the dialog box that appears after you click Add drive , select the network interface that is connected to your FCoE switch and click Add FCoE Disk(s) . Figure 16.23. Configure FCoE Parameters Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Enable or disable the installer's awareness of DCB with the checkbox in this dialog. This should only be set for networking interfaces that require a host-based DCBX client. Configurations on interfaces that implement a hardware DCBX client should leave this checkbox empty. Auto VLAN indicates whether VLAN discovery should be performed. If this box is checked, then the FIP VLAN discovery protocol will run on the Ethernet interface once the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs will be automatically created and FCoE instances will be created on the VLAN interfaces. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Storage_Devices-ppc |
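As noted in the storage-device selection discussion above, devices that are not selected during installation can be attached later by editing /etc/fstab on the installed system. The following is a minimal sketch, not an excerpt from this guide; the device name (/dev/sdb1), file system type (ext4), and mount point (/data) are assumptions to be replaced with your own values.

# Run as root on the installed system.
mkdir -p /data
echo '/dev/sdb1  /data  ext4  defaults  0 2' >> /etc/fstab
mount /data

Using a persistent identifier such as UUID=<uuid> instead of /dev/sdb1 is generally safer, because kernel device names can change between boots.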
Chapter 19. Managing user access | Chapter 19. Managing user access 19.1. Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes Red Hat Advanced Cluster Security for Kubernetes (RHACS) comes with role-based access control (RBAC) that you can use to configure roles and grant various levels of access to Red Hat Advanced Cluster Security for Kubernetes for different users. Beginning with version 3.63, RHACS includes a scoped access control feature that enables you to configure fine-grained and specific sets of permissions that define how a given RHACS user or a group of users can interact with RHACS, which resources they can access, and which actions they can perform. Roles are a collection of permission sets and access scopes. You can assign roles to users and groups by specifying rules. You can configure these rules when you configure an authentication provider. There are two types of roles in Red Hat Advanced Cluster Security for Kubernetes: System roles that are created by Red Hat and cannot be changed. Custom roles, which Red Hat Advanced Cluster Security for Kubernetes administrators can create and change at any time. Note If you assign multiple roles for a user, they get access to the combined permissions of the assigned roles. If you have users assigned to a custom role, and you delete that role, all associated users transfer to the minimum access role that you have configured. Permission sets are a set of permissions that define what actions a role can perform on a given resource. Resources are the functionalities of Red Hat Advanced Cluster Security for Kubernetes for which you can set view ( read ) and modify ( write ) permissions. There are two types of permission sets in Red Hat Advanced Cluster Security for Kubernetes: System permission sets, which are created by Red Hat and cannot be changed. Custom permission sets, which Red Hat Advanced Cluster Security for Kubernetes administrators can create and change at any time. Access scopes are a set of Kubernetes and OpenShift Container Platform resources that users can access. For example, you can define an access scope that only allows users to access information about pods in a given project. There are two types of access scopes in Red Hat Advanced Cluster Security for Kubernetes: System access scopes, which are created by Red Hat and cannot be changed. Custom access scopes, which Red Hat Advanced Cluster Security for Kubernetes administrators can create and change at any time. 19.1.1. System roles Red Hat Advanced Cluster Security for Kubernetes (RHACS) includes some default system roles that you can apply to users when you create rules. You can also create custom roles as required. System role Description Admin This role is targeted for administrators. Use it to provide read and write access to all resources. Analyst This role is targeted for a user who cannot make any changes, but can view everything. Use it to provide read-only access for all resources. Continuous Integration This role is targeted for CI (continuous integration) systems and includes the permission set required to enforce deployment policies. Network Graph Viewer This role is targeted for users who need to view the network graph. None This role has no read and write access to any resource. You can set this role as the minimum access role for all users. Sensor Creator RHACS uses this role to automate new cluster setups. It includes the permission set to create Sensors in secured clusters. 
Vulnerability Management Approver This role allows you to provide access to approve vulnerability deferrals or false positive requests. Vulnerability Management Requester This role allows you to provide access to request vulnerability deferrals or false positives. Vulnerability Report Creator This role allows you to create and manage vulnerability reporting configurations for scheduled vulnerability reports. 19.1.1.1. Viewing the permission set and access scope for a system role You can view the permission set and access scope for the default system roles. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Roles . Click on one of the roles to view its details. The details page shows the permission set and access scope for the selected role. Note You cannot modify permission set and access scope for the default system roles. 19.1.1.2. Creating a custom role You can create new roles from the Access Control view. Prerequisites You must have the Admin role, or read and write permissions for the Access resource to create, modify, and delete custom roles. You must create a permission set and an access scope for the custom role before creating the role. Procedure In the RHACS portal, go to Platform Configuration Access Control . Select Roles . Click Create role . Enter a Name and Description for the new role. Select a Permission set for the role. Select an Access scope for the role. Click Save . Additional resources Creating a custom permission set Creating a custom access scope 19.1.1.3. Assigning a role to a user or a group You can use the RHACS portal to assign roles to a user or a group. Procedure In the RHACS portal, go to Platform Configuration Access Control . From the list of authentication providers, select the authentication provider. Click Edit minimum role and rules . Under the Rules section, click Add new rule . For Key , select one of the values from userid , name , email or group . For Value , enter the value of the user ID, name, email address or group based on the key you selected. Click the Role drop-down menu and select the role you want to assign. Click Save . You can repeat these instructions for each user or group and assign different roles. 19.1.2. System permission sets Red Hat Advanced Cluster Security for Kubernetes includes some default system permission sets that you can apply to roles. You can also create custom permission sets as required. Permission set Description Admin Provides read and write access to all resources. Analyst Provides read-only access for all resources. Continuous Integration This permission set is targeted for CI (continuous integration) systems and includes the permissions required to enforce deployment policies. Network Graph Viewer Provides the minimum permissions to view network graphs. None No read and write permissions are allowed for any resource. Sensor Creator Provides permissions for resources that are required to create Sensors in secured clusters. 19.1.2.1. Viewing the permissions for a system permission set You can view the permissions for a system permission set in the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Permission sets . Click on one of the permission sets to view its details. The details page shows a list of resources and their permissions for the selected permission set. Note You cannot modify permissions for a system permission set. 19.1.2.2.
Creating a custom permission set You can create new permission sets from the Access Control view. Prerequisites You must have the Admin role, or read and write permissions for the Access resource to create, modify, and delete permission sets. Procedure In the RHACS portal, go to Platform Configuration Access Control . Select Permission sets . Click Create permission set . Enter a Name and Description for the new permission set. For each resource, under the Access level column, select one of the permissions from No access , Read access , or Read and Write access . Warning If you are configuring a permission set for users, you must grant read-only permissions for the following resources: Alert Cluster Deployment Image NetworkPolicy NetworkGraph WorkflowAdministration Secret These permissions are preselected when you create a new permission set. If you do not grant these permissions, users will experience issues with viewing pages in the RHACS portal. Click Save . 19.1.3. System access scopes Red Hat Advanced Cluster Security for Kubernetes includes some default system access scopes that you can apply to roles. You can also create custom access scopes as required. Access scope Description Unrestricted Provides access to all clusters and namespaces that Red Hat Advanced Cluster Security for Kubernetes monitors. Deny All Provides no access to any Kubernetes and OpenShift Container Platform resources. 19.1.3.1. Viewing the details for a system access scope You can view the Kubernetes and OpenShift Container Platform resources that are allowed and not allowed for an access scope in the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Access scopes . Click on one of the access scopes to view its details. The details page shows a list of clusters and namespaces, and which ones are allowed for the selected access scope. Note You cannot modify allowed resources for a system access scope. 19.1.3.2. Creating a custom access scope You can create new access scopes from the Access Control view. Prerequisites You must have the Admin role, or a role with the permission set with read and write permissions for the Access resource to create, modify, and delete permission sets. Procedure In the RHACS portal, go to Platform Configuration Access control . Select Access scopes . Click Create access scope . Enter a Name and Description for the new access scope. Under the Allowed resources section: Use the Cluster filter and Namespace filter fields to filter the list of clusters and namespaces visible in the list. Expand the Cluster name to see the list of namespaces in that cluster. To allow access to all namespaces in a cluster, toggle the switch in the Manual selection column. Note Access to a specific cluster provides users with access to the following resources within the scope of the cluster: OpenShift Container Platform or Kubernetes cluster metadata and security information Compliance information for authorized clusters Node metadata and security information Access to all namespaces in that cluster and their associated security information To allow access to a namespace, toggle the switch in the Manual selection column for a namespace.
Note Access to a specific namespace gives access to the following information within the scope of the namespace: Alerts and violations for deployments Vulnerability data for images Deployment metadata and security information Role and user information Network graph, policy, and baseline information for deployments Process information and process baseline configuration Prioritized risk information for each deployment If you want to allow access to clusters and namespaces based on labels, click Add label selector under the Label selection rules section. Then click Add rule to specify Key and Value pairs for the label selector. You can specify labels for clusters and namespaces. Click Save . 19.1.4. Resource definitions Red Hat Advanced Cluster Security for Kubernetes includes many resources. The following table lists the Red Hat Advanced Cluster Security for Kubernetes resources and describes the actions that users can perform with the read or write permission. Note To prevent privilege escalation, when you create a new token, your role's permissions limit the permission you can assign to that token. For example, if you only have read permission for the Integration resource, you cannot create a token with write permission. If you want a custom role to create tokens for other users to use, you must assign the required permissions to that custom role. Use short-lived tokens for machine-to-machine communication, such as CI/CD pipelines, scripts, and other automation. Also, use the roxctl central login command for human-to-machine communication, such as roxctl CLI or API access. The majority of cloud service providers support OIDC identity tokens, for example, Microsoft Entra ID, Google Cloud Identity Platform, and AWS Cognito. OIDC identity tokens issued by these services can be used for RHACS short-lived access. Resource Read permission Write permission Access View configurations for single sign-on (SSO) and role-based access control (RBAC) rules that match user metadata to Red Hat Advanced Cluster Security for Kubernetes roles and users that have accessed your Red Hat Advanced Cluster Security for Kubernetes instance, including the metadata that the authentication providers give about them. Create, modify, or delete SSO configurations and configured RBAC rules. Administration View the following items: Options for data retention, security notices and other related configurations The current logging verbosity level in Red Hat Advanced Cluster Security for Kubernetes components Manifest content for the uploaded probe files Existing image scanner integrations The status of automatic upgrades Metadata about Red Hat Advanced Cluster Security for Kubernetes service-to-service authentication The content of the scanner bundle (download) Edit the following items: Data retention, security notices, and related configurations The logging level Support packages in Central (upload) Image scanner integrations (create/modify/delete) Automatic upgrades for secured clusters (enable/disable) Service-to-service authentication credentials (revoke/re-issue) Alert View existing policy violations. Resolve or edit policy violations. CVE Internal use only Internal use only Cluster View existing secured clusters. Add new secured clusters and modify or delete existing clusters. Compliance View compliance standards and results, recent compliance runs, and the associated completion status. Trigger compliance runs. Deployment View deployments (workloads) in secured clusters. 
N/A DeploymentExtension View the following items: Process baselines Process activity in deployments Risk results Modify the following items: Process baselines (add or remove processes) Detection Check build-time policies against images or deployment YAML. N/A Image View images, their components, and their vulnerabilities. N/A Integration View integrations and their configuration, including backup, registry, image signature, notification systems, and API tokens. Add, modify, and delete integrations and their configurations, and API tokens. K8sRole View roles for Kubernetes RBAC in secured clusters. N/A K8sRoleBinding View role bindings for Kubernetes RBAC in secured clusters. N/A K8sSubject View users and groups for Kubernetes RBAC in secured clusters. N/A Namespace View existing Kubernetes namespaces in secured clusters. N/A NetworkGraph View active and allowed network connections in secured clusters. N/A NetworkPolicy View existing network policies in secured clusters and simulate changes. Apply network policy changes in secured clusters. Node View existing Kubernetes nodes in secured clusters. N/A WorkflowAdministration View all resource collections. Add, modify, or delete resource collections. Role View existing Red Hat Advanced Cluster Security for Kubernetes RBAC roles and their permissions. Add, modify, or delete roles and their permissions. Secret View metadata about secrets in secured clusters. N/A ServiceAccount List Kubernetes service accounts in secured clusters. N/A VulnerabilityManagementApprovals View all pending deferral or false positive requests for vulnerabilities. Approve or deny any pending deferral or false positive requests and move any previously approved requests back to observed. VulnerabilityManagementRequests View all pending deferral or false positive requests for vulnerabilities. Request a deferral on a vulnerability, mark it as a false positive, or move a pending or previously approved request made by the same user back to observed. WatchedImage View undeployed and monitored watched images. Configure watched images. WorkflowAdministration View all resource collections. Create, modify, or delete resource collections. 19.1.5. Declarative configuration for authentication and authorization resources You can use declarative configuration for authentication and authorization resources such as authentication providers, roles, permission sets, and access scopes. For instructions on how to use declarative configuration, see "Using declarative configuration" in the "Additional resources" section. Additional resources Using declarative configuration 19.2. Enabling PKI authentication If you use an enterprise certificate authority (CA) for authentication, you can configure Red Hat Advanced Cluster Security for Kubernetes (RHACS) to authenticate users by using their personal certificates. After you configure PKI authentication, users and API clients can log in using their personal certificates. Users without certificates can still use other authentication options, including API tokens, the local administrator password, or other authentication providers. PKI authentication is available on the same port number as the Web UI, gRPC, and REST APIs. When you configure PKI authentication, by default, Red Hat Advanced Cluster Security for Kubernetes uses the same port for PKI, web UI, gRPC, other single sign-on (SSO) providers, and REST APIs. You can also configure a separate port for PKI authentication by using a YAML configuration file to configure and expose endpoints. 19.2.1. 
Configuring PKI authentication by using the RHACS portal You can configure Public Key Infrastructure (PKI) authentication by using the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create Auth Provider and select User Certificates from the drop-down list. In the Name field, specify a name for this authentication provider. In the CA certificate(s) (PEM) field, paste your root CA certificate in PEM format. Assign a Minimum access role for users who access RHACS using PKI authentication. A user must have the permissions granted to this role or a role with higher permissions to log in to RHACS. Tip For security, Red Hat recommends first setting the Minimum access role to None while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. To add access rules for users and groups accessing RHACS, click Add new rule in the Rules section. For example, to give the Admin role to a user called administrator , you can use the following key-value pairs to create access rules: Key Value Name administrator Role Admin Click Save . 19.2.2. Configuring PKI authentication by using the roxctl CLI You can configure PKI authentication by using the roxctl CLI. Procedure Run the following command: $ roxctl -e <hostname>:<port_number> central userpki create -c <ca_certificate_file> -r <default_role_name> <provider_name> 19.2.3. Updating authentication keys and certificates You can update your authentication keys and certificates by using the RHACS portal. Procedure Create a new authentication provider. Copy the role mappings from your old authentication provider to the new authentication provider. Rename or delete the old authentication provider with the old root CA key. 19.2.4. Logging in by using a client certificate After you configure PKI authentication, users see a certificate prompt in the RHACS portal login page. The prompt only shows up if a client certificate trusted by the configured root CA is installed on the user's system. Use the procedure described in this section to log in by using a client certificate. Procedure Open the RHACS portal. Select a certificate in the browser prompt. On the login page, select the authentication provider name option to log in with a certificate. If you do not want to log in by using the certificate, you can also log in by using the administrator password or another login method. Note Once you use a client certificate to log into the RHACS portal, you cannot log in with a different certificate unless you restart your browser. 19.3. Understanding authentication providers An authentication provider connects to a third-party source of user identity (for example, an identity provider or IDP), gets the user identity, issues a token based on that identity, and returns the token to Red Hat Advanced Cluster Security for Kubernetes (RHACS). This token allows RHACS to authorize the user. RHACS uses the token within the user interface and API calls. After installing RHACS, you must set up your IDP to authorize users. Note If you are using OpenID Connect (OIDC) as your IDP, RHACS relies on mapping rules that examine the values of specific claims like groups , email , userid and name from either the user ID token or the UserInfo endpoint response to authorize the users. If these details are absent, the mapping cannot succeed and the user does not get access to the required resources.
Therefore, you need to ensure that the claims required to authorize users from your IDP, for example, groups , are included in the authentication response of your IDP to enable successful mapping. Additional resources Configuring Okta Identity Cloud as a SAML 2.0 identity provider Configuring Google Workspace as an OIDC identity provider Configuring OpenShift Container Platform OAuth server as an identity provider Connecting Azure AD to RHACS using SSO configuration 19.3.1. Claim mappings A claim is the data an identity provider includes about a user inside the token they issue. Using claim mappings, you can specify if RHACS should customize the claim attribute it receives from an IDP to another attribute in the RHACS-issued token. If you do not use the claim mapping, RHACS does not include the claim attribute in the RHACS-issued token. For example, you can map from roles in the user identity to groups in the RHACS-issued token using claim mapping. RHACS uses different default claim mappings for every authentication provider. 19.3.1.1. OIDC default claim mappings The following list provides the default OIDC claim mappings: sub to userid name to name email to email groups to groups 19.3.1.2. Auth0 default claim mappings The Auth0 default claim mappings are the same as the OIDC default claim mappings. 19.3.1.3. SAML 2.0 default claim mappings The following list applies to SAML 2.0 default claim mappings: Subject.NameID is mapped to userid every SAML AttributeStatement.Attribute from the response gets mapped to its name 19.3.1.4. Google IAP default claim mappings The following list provides the Google IAP default claim mappings: sub to userid email to email hd to hd google.access_levels to access_levels 19.3.1.5. User certificates default claim mappings User certificates differ from all other authentication providers because instead of communicating with a third-party IDP, they get user information from certificates used by the user. The default claim mappings for user certificates include: CertFingerprint to userid Subject Common Name to name EmailAddresses to email Subject Organizational Unit to groups 19.3.1.6. OpenShift Auth default claim mappings The following list provides the OpenShift Auth default claim mappings: groups to groups uid to userid name to name 19.3.2. Rules To authorize users, RHACS relies on mapping rules that examine the values of specific claims such as groups , email , userid , and name from the user identity. Rules allow mapping of users who have attributes with a specific value to a specific role. As an example, a rule could include the following:`key` is email , value is [email protected] , role is Admin . If the claim is missing, the mapping cannot succeed, and the user does not get access to the required resources. Therefore, to enable successful mapping, you must ensure that the authentication response from your IDP includes the required claims to authorize users, for example, groups . 19.3.3. Minimum access role RHACS assigns a minimum access role to every caller with a RHACS token issued by a particular authentication provider. The minimum access role is set to None by default. For example, suppose there is an authentication provider with the minimum access role of Analyst . In that case, all users who log in using this provider will have the Analyst role assigned to them. 19.3.4. Required attributes Required attributes can restrict issuing of the RHACS token based on whether a user identity has an attribute with a specific value. 
For example, you can configure RHACS only to issue a token when the attribute with key is_internal has the attribute value true . Users with the attribute is_internal set to false or not set do not get a token. 19.4. Configuring identity providers 19.4.1. Configuring Okta Identity Cloud as a SAML 2.0 identity provider You can use Okta as a single sign-on (SSO) provider for Red Hat Advanced Cluster Security for Kubernetes (RHACS). 19.4.1.1. Creating an Okta app Before you can use Okta as a SAML 2.0 identity provider for Red Hat Advanced Cluster Security for Kubernetes, you must create an Okta app. Warning Okta's Developer Console does not support the creation of custom SAML 2.0 applications. If you are using the Developer Console , you must first switch to the Admin Console ( Classic UI ). To switch, click Developer Console in the top left of the page and select Classic UI . Prerequisites You must have an account with administrative privileges for the Okta portal. Procedure On the Okta portal, select Applications from the menu bar. Click Add Application and then select Create New App . In the Create a New Application Integration dialog box, leave Web as the platform and select SAML 2.0 as the protocol that you want to use to sign in users. Click Create . On the General Settings page, enter a name for the app in the App name field. Click Next . On the SAML Settings page, set values for the following fields: Single sign on URL Specify it as https://<RHACS_portal_hostname>/sso/providers/saml/acs . Leave the Use this for Recipient URL and Destination URL option checked. If your RHACS portal is accessible at different URLs, you can add them here by checking the Allow this app to request other SSO URLs option and add the alternative URLs using the specified format. Audience URI (SP Entity ID) Set the value to RHACS or another value of your choice. Remember the value you choose; you will need this value when you configure Red Hat Advanced Cluster Security for Kubernetes. Attribute Statements You must add at least one attribute statement. Red Hat recommends using the email attribute: Name: email Format: Unspecified Value: user.email Verify that you have configured at least one Attribute Statement before continuing. Click Next . On the Feedback page, select an option that applies to you. Select an appropriate App type . Click Finish . After the configuration is complete, you are redirected to the Sign On settings page for the new app. A yellow box contains links to the information that you need to configure Red Hat Advanced Cluster Security for Kubernetes. After you have created the app, assign Okta users to this application. Go to the Assignments tab, and assign the set of individual users or groups that can access Red Hat Advanced Cluster Security for Kubernetes. For example, assign the group Everyone to allow all users in the organization to access Red Hat Advanced Cluster Security for Kubernetes. 19.4.1.2. Configuring a SAML 2.0 identity provider Use the instructions in this section to integrate a Security Assertion Markup Language (SAML) 2.0 identity provider with Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites You must have permissions to configure identity providers in RHACS. For Okta identity providers, you must have an Okta app configured for RHACS. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create auth provider and select SAML 2.0 from the drop-down list.
In the Name field, enter a name to identify this authentication provider; for example, Okta or Google . The integration name is shown on the login page to help users select the correct sign-in option. In the ServiceProvider issuer field, enter the value that you are using as the Audience URI or SP Entity ID in Okta, or a similar value in other providers. Select the type of Configuration : Option 1: Dynamic Configuration : If you select this option, enter the IdP Metadata URL , or the URL of Identity Provider metadata available from your identity provider console. The configuration values are acquired from the URL. Option 2: Static Configuration : Copy the required static fields from the View Setup Instructions link in the Okta console, or a similar location for other providers: IdP Issuer IdP SSO URL Name/ID Format IdP Certificate(s) (PEM) Assign a Minimum access role for users who access RHACS using SAML. Tip Set the Minimum access role to Admin while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. Click Save . Important The duration of the user session depends on whether your SAML identity provider's authentication response includes a NotValidAfter assertion: Includes a NotValidAfter assertion: The user session remains valid until the time specified in the NotValidAfter field has elapsed. After the user session expires, users must reauthenticate. Does not include a NotValidAfter assertion: The user session remains valid for 30 days, and then users must reauthenticate. Verification In the RHACS portal, go to Platform Configuration Access Control . Select the Auth Providers tab. Click the authentication provider for which you want to verify the configuration. Select Test login from the Auth Provider section header. The Test login page opens in a new browser tab. Sign in with your credentials. If you logged in successfully, RHACS shows the User ID and User Attributes that the identity provider sent for the credentials that you used to log in to the system. If your login attempt failed, RHACS shows a message describing why the identity provider's response could not be processed. Close the Test login browser tab. Note Even if the response indicates successful authentication, you might need to create additional access rules based on the user metadata from your identity provider. 19.4.2. Configuring Google Workspace as an OIDC identity provider You can use Google Workspace as a single sign-on (SSO) provider for Red Hat Advanced Cluster Security for Kubernetes. 19.4.2.1. Setting up OAuth 2.0 credentials for your GCP project To configure Google Workspace as an identity provider for Red Hat Advanced Cluster Security for Kubernetes, you must first configure OAuth 2.0 credentials for your GCP project. Prerequisites You must have administrator-level access to your organization's Google Workspace account to create a new project, or permissions to create and configure OAuth 2.0 credentials for an existing project. Red Hat recommends that you create a new project for managing access to Red Hat Advanced Cluster Security for Kubernetes. Procedure Create a new Google Cloud Platform (GCP) project; see the Google documentation topic creating and managing projects . After you have created the project, open the Credentials page in the Google API Console. Verify the project name listed in the upper left corner near the logo to make sure that you are using the correct project. To create new credentials, go to Create Credentials OAuth client ID .
Choose Web application as the Application type . In the Name box, enter a name for the application, for example, RHACS . In the Authorized redirect URIs box, enter https://<stackrox_hostname>:<port_number>/sso/providers/oidc/callback . replace <stackrox_hostname> with the hostname on which you expose your Central instance. replace <port_number> with the port number on which you expose Central. If you are using the standard HTTPS port 443 , you can omit the port number. Click Create . This creates an application and credentials and redirects you back to the credentials page. An information box opens, showing details about the newly created application. Close the information box. Copy and save the Client ID that ends with .apps.googleusercontent.com . You can check this client ID by using the Google API Console. Select OAuth consent screen from the navigation menu on the left. Note The OAuth consent screen configuration is valid for the entire GCP project, and not only to the application you created in the steps. If you already have an OAuth consent screen configured in this project and want to apply different settings for Red Hat Advanced Cluster Security for Kubernetes login, create a new GCP project. On the OAuth consent screen page: Choose the Application type as Internal . If you select Public , anyone with a Google account can sign in. Enter a descriptive Application name . This name is shown to users on the consent screen when they sign in. For example, use RHACS or <organization_name> SSO for Red Hat Advanced Cluster Security for Kubernetes . Verify that the Scopes for Google APIs only lists email , profile , and openid scopes. Only these scopes are required for single sign-on. If you grant additional scopes it increases the risk of exposing sensitive data. 19.4.2.2. Specifying a client secret Red Hat Advanced Cluster Security for Kubernetes version 3.0.39 and newer supports the OAuth 2.0 Authorization Code Grant authentication flow when you specify a client secret. When you use this authentication flow, Red Hat Advanced Cluster Security for Kubernetes uses a refresh token to keep users logged in beyond the token expiration time configured in your OIDC identity provider. When users log out, Red Hat Advanced Cluster Security for Kubernetes deletes the refresh token from the client-side. Additionally, if your identity provider API supports refresh token revocation, Red Hat Advanced Cluster Security for Kubernetes also sends a request to your identity provider to revoke the refresh token. You can specify a client secret when you configure Red Hat Advanced Cluster Security for Kubernetes to integrate with an OIDC identity provider. Note You cannot use a Client Secret with the Fragment Callback mode . You cannot edit configurations for existing authentication providers. You must create a new OIDC integration in Red Hat Advanced Cluster Security for Kubernetes if you want to use a Client Secret . Red Hat recommends using a client secret when connecting Red Hat Advanced Cluster Security for Kubernetes with an OIDC identity provider. If you do not want to use a Client Secret , you must select the Do not use Client Secret (not recommended) option. 19.4.2.3. Configuring an OIDC identity provider You can configure Red Hat Advanced Cluster Security for Kubernetes (RHACS) to use your OpenID Connect (OIDC) identity provider. Prerequisites You must have already configured an application in your identity provider, such as Google Workspace. 
You must have permissions to configure identity providers in RHACS. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create auth provider and select OpenID Connect from the drop-down list. Enter information in the following fields: Name : A name to identify your authentication provider; for example, Google Workspace . The integration name is shown on the login page to help users select the correct sign-in option. Callback mode : Select Auto-select (recommended) , which is the default value, unless the identity provider requires another mode. Note Fragment mode is designed around the limitations of Single Page Applications (SPAs). Red Hat only supports the Fragment mode for early integrations and does not recommend using it for later integrations. Issuer : The root URL of your identity provider; for example, https://accounts.google.com for Google Workspace. See your identity provider documentation for more information. Note If you are using RHACS version 3.0.49 and later, for Issuer you can perform these actions: Prefix your root URL with https+insecure:// to skip TLS validation. This configuration is insecure and Red Hat does not recommend it. Only use it for testing purposes. Specify query strings; for example, ?key1=value1&key2=value2 along with the root URL. RHACS appends the value of Issuer as you entered it to the authorization endpoint. You can use it to customize your provider's login screen. For example, you can optimize the Google Workspace login screen to a specific hosted domain by using the hd parameter , or preselect an authentication method in PingFederate by using the pfidpadapterid parameter . Client ID : The OIDC Client ID for your configured project. Client Secret : Enter the client secret provided by your identity provider (IdP). If you are not using a client secret, which is not recommended, select Do not use Client Secret . Assign a Minimum access role for users who access RHACS using the selected identity provider. Tip Set the Minimum access role to Admin while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. To add access rules for users and groups accessing RHACS, click Add new rule in the Rules section. For example, to give the Admin role to a user called administrator , you can use the following key-value pairs to create access rules: Key Value Name administrator Role Admin Click Save . Verification In the RHACS portal, go to Platform Configuration Access Control . Select the Auth providers tab. Select the authentication provider for which you want to verify the configuration. Select Test login from the Auth Provider section header. The Test login page opens in a new browser tab. Log in with your credentials. If you logged in successfully, RHACS shows the User ID and User Attributes that the identity provider sent for the credentials that you used to log in to the system. If your login attempt failed, RHACS shows a message describing why the identity provider's response could not be processed. Close the Test Login browser tab. 19.4.3. Configuring OpenShift Container Platform OAuth server as an identity provider OpenShift Container Platform includes a built-in OAuth server that you can use as an authentication provider for Red Hat Advanced Cluster Security for Kubernetes (RHACS). 19.4.3.1.
Configuring OpenShift Container Platform OAuth server as an identity provider To integrate the built-in OpenShift Container Platform OAuth server as an identity provider for RHACS, use the instructions in this section. Prerequisites You must have the Access permission to configure identity providers in RHACS. You must have already configured users and groups in OpenShift Container Platform OAuth server through an identity provider. For information about the identity provider requirements, see Understanding identity provider configuration . Note The following procedure configures only a single main route named central for the OpenShift Container Platform OAuth server. Procedure In the RHACS portal, go to Platform Configuration Access Control . Click Create auth provider and select OpenShift Auth from the drop-down list. Enter a name for the authentication provider in the Name field. Assign a Minimum access role for users that access RHACS using the selected identity provider. A user must have the permissions granted to this role or a role with higher permissions to log in to RHACS. Tip For security, Red Hat recommends first setting the Minimum access role to None while you complete setup. Later, you can return to the Access Control page to set up more tailored access rules based on user metadata from your identity provider. Optional: To add access rules for users and groups accessing RHACS, click Add new rule in the Rules section, then enter the rule information and click Save . You will need attributes for the user or group so that you can configure access. Tip Group mappings are more robust because groups are usually associated with teams or permissions sets and require modification less often than users. To get user information in OpenShift Container Platform, you can use one of the following methods: Click User Management Users <username> YAML . Access the k8s/cluster/user.openshift.io~v1~User/<username>/yaml file and note the values for name , uid ( userid in RHACS), and groups . Use the OpenShift Container Platform API as described in the OpenShift Container Platform API reference . The following configuration example describes how to configure rules for an Admin role with the following attributes: name : administrator groups : ["system:authenticated", "system:authenticated:oauth", "myAdministratorsGroup"] uid : 12345-00aa-1234-123b-123fcdef1234 You can add a rule for this administrator role using one of the following steps: To configure a rule for a name, select name from the Key drop-down list, enter administrator in the Value field, then select Admin under Role . To configure a rule for a group, select groups from the Key drop-down list, enter myAdministratorsGroup in the Value field, then select Admin under Role . To configure a rule for a user name, select userid from the Key drop-down list, enter 12345-00aa-1234-123b-123fcdef1234 in the Value field, then select Admin under Role . Important If you use a custom TLS certificate for OpenShift Container Platform OAuth server, you must add the root certificate of the CA to Red Hat Advanced Cluster Security for Kubernetes as a trusted root CA. Otherwise, Central cannot connect to the OpenShift Container Platform OAuth server.
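The name , uid , and groups attributes used in the rule examples above can also be read with the oc CLI instead of the web console. The following is a minimal sketch, assuming that you are logged in to the cluster with permission to read User and Group objects; the administrator user name is only the example used in this section:

oc get user administrator -o yaml    # full object; note metadata.name, metadata.uid, and the groups list
oc get user administrator -o jsonpath='{.metadata.name}{"\n"}{.metadata.uid}{"\n"}{.groups}{"\n"}'    # print only the fields that map to RHACS rule keys
oc get groups -o custom-columns=NAME:.metadata.name,USERS:.users    # list groups and their members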
To enable the OpenShift Container Platform OAuth server integration when installing Red Hat Advanced Cluster Security for Kubernetes using the roxctl CLI, set the ROX_ENABLE_OPENSHIFT_AUTH environment variable to true in Central: USD oc -n stackrox set env deploy/central ROX_ENABLE_OPENSHIFT_AUTH=true For access rules, the OpenShift Container Platform OAuth server does not return the key Email . Additional resources Configuring an LDAP identity provider Adding trusted certificate authorities 19.4.3.2. Creating additional routes for OpenShift Container Platform OAuth server When you configure OpenShift Container Platform OAuth server as an identity provider by using Red Hat Advanced Cluster Security for Kubernetes portal, RHACS configures only a single route for the OAuth server. However, you can create additional routes by specifying them as annotations in the Central custom resource. Prerequisites You must have configured Service accounts as OAuth clients for your OpenShift Container Platform OAuth server. Procedure If you installed RHACS using the RHACS Operator: Create a CENTRAL_ADDITIONAL_ROUTES environment variable that contains a patch for the Central custom resource: USD CENTRAL_ADDITIONAL_ROUTES=' spec: central: exposure: loadBalancer: enabled: false port: 443 nodePort: enabled: false route: enabled: true persistence: persistentVolumeClaim: claimName: stackrox-db customize: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"central\"}}" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"second-central\"}}" 4 ' 1 The redirect URI for setting the main route. 2 The redirect URI reference for the main route. 3 The redirect for setting the second route. 4 The redirect reference for the second route. Apply the CENTRAL_ADDITIONAL_ROUTES patch to the Central custom resource: USD oc patch centrals.platform.stackrox.io \ -n <namespace> \ 1 <custom-resource> \ 2 --patch "USDCENTRAL_ADDITIONAL_ROUTES" \ --type=merge 1 Replace <namespace> with the name of the project that contains the Central custom resource. 2 Replace <custom-resource> with the name of the Central custom resource. Or, if you installed RHACS using Helm: Add the following annotations to your values-public.yaml file: customize: central: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"central\"}}" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: "{\"kind\":\"OAuthRedirectReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"Route\",\"name\":\"second-central\"}}" 4 1 The redirect for setting the main route. 2 The redirect reference for the main route. 3 The redirect for setting the second route. 4 The redirect reference for the second route. 
Apply the custom annotations to the Central custom resource by using helm upgrade : USD helm upgrade -n stackrox \ stackrox-central-services rhacs/central-services \ -f <path_to_values_public.yaml> 1 1 Specify the path of the values-public.yaml configuration file using the -f option. Additional resources Service accounts as OAuth clients Redirect URIs for service accounts as OAuth clients 19.4.4. Connecting Azure AD to RHACS using SSO configuration To connect an Azure Active Directory (AD) to RHACS using single sign-on (SSO) configuration, you need to add specific claims (for example, group claim to tokens) and assign users, groups, or both to the enterprise application. 19.4.4.1. Adding group claims to tokens for SAML applications using SSO configuration Configure the application registration in Azure AD to include group claims in tokens. For instructions, see Add group claims to tokens for SAML applications using SSO configuration . Important Verify that you are using the latest version of Azure AD. For more information on how to upgrade Azure AD to the latest version, see Azure AD Connect: Upgrade from a previous version to the latest . 19.5. Removing the admin user Red Hat Advanced Cluster Security for Kubernetes (RHACS) creates an administrator account, admin , during the installation process that can be used to log in with a user name and password. The password is dynamically generated unless specifically overridden and is unique to your RHACS instance. In production environments, it is highly recommended to create an authentication provider and remove the admin user. 19.5.1. Removing the admin user after installation After an authentication provider has been successfully created, it is strongly recommended to remove the admin user. The procedure for removing the admin user depends on the installation method of the RHACS portal. Procedure Perform one of the following procedures: For Operator installations, set central.adminPasswordGenerationDisabled to true in your Central custom resource. For Helm installations: In your Central Helm configuration, set central.adminPassword.generate to false . Follow the steps to change the configuration. See "Changing configuration options after deployment" for more information. For roxctl installations: When generating the manifest, set Disable password generation to true . Follow the steps to install Central by using roxctl to apply the changes. See "Install Central using the roxctl CLI" for more information. Additional resources Changing configuration options after deploying the central-services Helm chart (OpenShift Container Platform) Changing configuration options after deploying the central-services Helm chart (Kubernetes) Install Central using the roxctl CLI After applying the configuration changes, you cannot log in as an admin user. Note You can add the admin user again as a fallback by reverting the configuration changes. When enabling the admin user again, a new password is generated. 19.6. Configuring short-lived access Red Hat Advanced Cluster Security for Kubernetes (RHACS) provides the ability to configure short-lived access to the user interface and API calls. You can configure this by exchanging OpenID Connect (OIDC) identity tokens for a RHACS-issued token. We recommend this especially for Continuous Integration (CI) usage, where short-lived access is preferable over long-lived API tokens.
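The exchange described in the following workflow ultimately amounts to a single API call against Central. Here is a minimal sketch using curl and jq ; it assumes that a machine access configuration for your issuer already exists, the Central endpoint and identity token values are placeholders, and you might need additional TLS options such as --cacert in your environment:

ROX_ENDPOINT="central.example.com:443"    # placeholder Central address
ID_TOKEN="<oidc_identity_token>"    # identity token obtained from your IDP
# Exchange the OIDC identity token for a short-lived RHACS access token
ACCESS_TOKEN=$(curl -s "https://${ROX_ENDPOINT}/v1/auth/m2m/exchange" -d "{\"idToken\": \"${ID_TOKEN}\"}" | jq -r '.accessToken')
# Use the returned token as a Bearer token on subsequent API calls, for example to check authentication status
curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" "https://${ROX_ENDPOINT}/v1/auth/status"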
The following steps outline the high-level workflow on how to configure short-lived access to the user interface and API calls: Configuring RHACS to trust OIDC identity token issuers for exchanging short-lived RHACS-issued tokens. Exchanging an OIDC identity token for a short-lived RHACS-issued token by calling the API. Note To prevent privilege escalation, when you create a new token, your role's permissions limit the permission you can assign to that token. For example, if you only have read permission for the Integration resource, you cannot create a token with write permission. If you want a custom role to create tokens for other users to use, you must assign the required permissions to that custom role. Use short-lived tokens for machine-to-machine communication, such as CI/CD pipelines, scripts, and other automation. Also, use the roxctl central login command for human-to-machine communication, such as roxctl CLI or API access. The majority of cloud service providers support OIDC identity tokens, for example, Microsoft Entra ID, Google Cloud Identity Platform, and AWS Cognito. OIDC identity tokens issued by these services can be used for RHACS short-lived access. Additional resources Using Azure Entra ID service principals for machine to machine auth with RHACS Using an authentication provider to authenticate with roxctl Configuring API tokens 19.6.1. Configure short-lived access for an OIDC identity token issuer Start configuring short-lived access for an OpenID Connect (OIDC) identity token issuer. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll to the Authentication Tokens category, and then click Machine access configuration . Click Create configuration . Select the configuration type , choosing one of the following: Generic if you use an arbitrary OIDC identity token issuer. GitHub Actions if you plan to access RHACS from GitHub Actions. Enter the OIDC identity token issuer. Enter the token lifetime for tokens issued by the configuration. Note The format for the token lifetime is XhYmZs and you cannot set it for longer than 24 hours. Add rules to the configuration: The Key is the OIDC token's claim to use. The Value is the expected OIDC token claim value. The Role is the role to assign to the token if the OIDC token claim and value exist. Note Rules are similar to Authentication Provider rules to assign roles based on claim values. As a general rule, Red Hat recommends to use unique, immutable claims within Rules. The general recommendation is to use the sub claim within the OIDC identity token. For more information about OIDC token claims, see the list of standard OIDC claims . Click Save . 19.6.2. Exchanging an identity token Prerequisites You have a valid OpenID Connect (OIDC) token. You added a Machine access configuration for the RHACS instance you want to access. Procedure Prepare the POST request's JSON data: { "idToken": "<id_token>" } Send a POST request to the API /v1/auth/m2m/exchange . Wait for the API response: { "accessToken": "<access_token>" } Use the returned access token to access the RHACS instance. Note If you are using GitHub Actions , you can use the stackrox/central-login GitHub Action . 19.7. Understanding multi-tenancy Red Hat Advanced Cluster Security for Kubernetes provides ways to implement multi-tenancy within a Central instance. You can implement multi-tenancy by using role-based access control (RBAC) and access scopes within RHACS. 19.7.1. 
Understanding resource scoping RHACS includes resources which are used within RBAC. In addition to associating permissions for a resource, each resource is also scoped. In RHACS, resources are scoped as the following types: Global scope, where a resource is not assigned to any cluster or namespace Cluster scope, where a resource is assigned to particular clusters Namespace scope, where a resource is assigned to particular namespaces The scope of resources is important when creating custom access scopes. Custom access scopes are used to create multi-tenancy within RHACS. Only resources which are cluster or namespace scoped are applicable for scoping in access scopes. Globally scoped resources are not scoped by access scopes. Therefore, multi-tenancy within RHACS can only be achieved for resources that are scoped either by cluster or namespace. 19.7.2. Multi-tenancy per namespace configuration example A common example for multi-tenancy within RHACS is associating users with a specific namespace and only allowing them access to their specific namespace. The following example combines a custom permission set, access scope, and role. The user or group assigned with this role can only see CVE information, violations, and information about deployments in the particular namespace or cluster scoped to them. Procedure In the RHACS portal, select Platform Configuration Access Control . Select Permission Sets . Click Create permission set . Enter a Name and Description for the permission set. Select the following resources and access level and click Save : READ Alert READ Deployment READ DeploymentExtension READ Image READ K8sRole READ K8sRoleBinding READ K8sSubject READ NetworkGraph READ NetworkPolicy READ Secret READ ServiceAccount Select Access Scopes . Click Create access scope . Enter a Name and Description for the access scope. In the Allowed resources section, select the namespace you want to use for scoping and click Save . Select Roles . Click Create role . Enter a Name and Description for the role. Select the previously created Permission Set and Access scope for the role and click Save . Assign the role to your required user or group. See Assigning a role to a user or a group . Note The RHACS dashboard options for users with the sample role are minimal compared to options available to an administrator. Only relevant pages are visible for the user. 19.7.3. Limitations Achieving multi-tenancy within RHACS is not possible for resources with a global scope . The following resources have a global scope: Access Administration Detection Integration VulnerabilityManagementApprovals VulnerabilityManagementRequests WatchedImage WorkflowAdministration These resources are shared across all users within a RHACS Central instance and cannot be scoped. Additional resources Creating a custom permission set Create a custom access scope Create a custom role | [
"roxctl -e <hostname>:<port_number> central userpki create -c <ca_certificate_file> -r <default_role_name> <provider_name>",
"oc -n stackrox set env deploy/central ROX_ENABLE_OPENSHIFT_AUTH=true",
"CENTRAL_ADDITIONAL_ROUTES=' spec: central: exposure: loadBalancer: enabled: false port: 443 nodePort: enabled: false route: enabled: true persistence: persistentVolumeClaim: claimName: stackrox-db customize: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"central\\\"}}\" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"second-central\\\"}}\" 4 '",
"oc patch centrals.platform.stackrox.io -n <namespace> \\ 1 <custom-resource> \\ 2 --patch \"USDCENTRAL_ADDITIONAL_ROUTES\" --type=merge",
"customize: central: annotations: serviceaccounts.openshift.io/oauth-redirecturi.main: sso/providers/openshift/callback 1 serviceaccounts.openshift.io/oauth-redirectreference.main: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"central\\\"}}\" 2 serviceaccounts.openshift.io/oauth-redirecturi.second: sso/providers/openshift/callback 3 serviceaccounts.openshift.io/oauth-redirectreference.second: \"{\\\"kind\\\":\\\"OAuthRedirectReference\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"reference\\\":{\\\"kind\\\":\\\"Route\\\",\\\"name\\\":\\\"second-central\\\"}}\" 4",
"helm upgrade -n stackrox stackrox-central-services rhacs/central-services -f <path_to_values_public.yaml> 1",
"{ \"idToken\": \"<id_token>\" }",
"{ \"accessToken\": \"<access_token>\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/managing-user-access |
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/preface |
Appendix D. Ceph File System mirrors configuration reference | Appendix D. Ceph File System mirrors configuration reference This section lists configuration options for Ceph File System (CephFS) mirrors. cephfs_mirror_max_concurrent_directory_syncs Description Maximum number of directory snapshots that can be synchronized concurrently by cephfs-mirror daemon. Controls the number of synchronization threads. Type Integer Default 3 Min 1 cephfs_mirror_action_update_interval Description Interval in seconds to process pending mirror update actions. Type secs Default 2 Min 1 cephfs_mirror_restart_mirror_on_blocklist_interval Description Interval in seconds to restart blocklisted mirror instances. Setting to zero ( 0 ) disables restarting blocklisted instances. Type secs Default 30 Min 0 cephfs_mirror_max_snapshot_sync_per_cycle Description Maximum number of snapshots to mirror when a directory is picked up for mirroring by worker threads. Type Integer Default 3 Min 1 cephfs_mirror_directory_scan_interval Description Interval in seconds to scan configured directories for snapshot mirroring. Type Integer Default 10 Min 1 cephfs_mirror_max_consecutive_failures_per_directory Description Number of consecutive snapshot synchronization failures to mark a directory as "failed". Failed directories are retried for synchronization less frequently. Type Integer Default 10 Min 0 cephfs_mirror_retry_failed_directories_interval Description Interval in seconds to retry synchronization for failed directories. Type Integer Default 60 Min 1 cephfs_mirror_restart_mirror_on_failure_interval Description Interval in seconds to restart failed mirror instances. Setting to zero ( 0 ) disables restarting failed mirror instances. Type secs Default 20 Min 0 cephfs_mirror_mount_timeout Description Timeout in seconds for mounting primary or secondary CephFS by the cephfs-mirror daemon. Setting this to a higher value could result in the mirror daemon getting stalled when mounting a file system if the cluster is not reachable. This option is used to override the usual client_mount_timeout . Type secs Default 10 Min 0 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/ceph-file-system-mirrors-configuration-reference_fs |
5.4. Configuring Fence Devices | 5.4. Configuring Fence Devices Configuring fence devices for the cluster consists of selecting one or more fence devices and specifying fence-device-dependent parameters (for example, name, IP address, login, and password). To configure fence devices, follow these steps: Click Fence Devices . At the bottom of the right frame (labeled Properties ), click the Add a Fence Device button. Clicking Add a Fence Device causes the Fence Device Configuration dialog box to be displayed (refer to Figure 5.4, "Fence Device Configuration" ). Figure 5.4. Fence Device Configuration At the Fence Device Configuration dialog box, click the drop-down box under Add a New Fence Device and select the type of fence device to configure. Specify the information in the Fence Device Configuration dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters. Click OK . Choose File => Save to save the changes to the cluster configuration. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-fence-devices-CA |
Chapter 2. Understanding AMQ Broker | Chapter 2. Understanding AMQ Broker AMQ Broker enables you to loosely couple heterogeneous systems together, while providing reliability, transactions, and many other features. Before using AMQ Broker, you should understand the capabilities it offers. 2.1. Broker instances In AMQ Broker, the installed AMQ Broker software serves as a "home" for one or more broker instances . This architecture provides several benefits, such as: You can create as many broker instances as you require from a single AMQ Broker installation. The AMQ Broker installation contains the necessary binaries and resources that each broker instance needs to run. These resources are then shared between the broker instances. When upgrading to a new version of AMQ Broker, you only need to update the software once, even if you are running multiple broker instances on that host. You can think of a broker instance as a message broker. Each broker instance has its own directory containing its unique configuration and runtime data. This runtime data consists of logs and data files, and is associated with a unique broker process on the host. 2.2. Message persistence AMQ Broker persists message data to ensure that messages are never lost, even if the broker fails or shuts down unexpectedly. AMQ Broker provides two options for message persistence: journal-based persistence and database persistence. Journal-based persistence The default method, this option writes message data to message journal files stored on the file system. Initially, each of these journal files is created automatically with a fixed size and filled with empty data. As clients perform various broker operations, records are appended to the journal. When one of the journal files is full, the broker moves to the next journal file. Journal-based persistence supports transactional operations, including both local and XA transactions. Journal-based persistence requires an IO interface to the file system. AMQ Broker supports the following: Linux Asynchronous IO (AIO) AIO typically provides the best performance. Make sure that your network file system is listed as supported in Red Hat AMQ 7 Supported Configurations . Java NIO Java NIO provides good performance, and it can run on any platform with a Java 6 or later runtime. Database persistence This option stores message and bindings data in a database by using Java Database Connectivity (JDBC). This option is a good choice if you already have a reliable and high performing database platform in your environment, or if using a database is mandated by company policy. The broker JDBC persistence store uses a standard JDBC driver to create a JDBC connection that stores message and bindings data in database tables. The data in the database tables is encoded using the same encoding as journal-based persistence. This means that messages stored in the database are not human-readable if accessed directly using SQL. To use database persistence, you must use a supported database platform. To see the currently supported database platforms, see Red Hat AMQ 7 Supported Configurations . 2.3. Resource consumption AMQ Broker provides a number of options to limit memory and resource consumption on the broker. Resource limits You can set connection and queue limits for each user. This prevents users from consuming too many of the broker's resources and causing degraded performance for other users.
Message paging Message paging enables AMQ Broker to support large queues containing millions of messages while also running with a limited amount of memory. When the broker receives a surge of messages that exceeds its memory capacity, it begins paging messages to disk. This paging process is transparent; the broker pages messages into and out of memory as needed. Message paging is address-based. When the size of all messages in memory for an address exceeds the maximum size, each additional message for the address will be paged to the address's page file. Large messages With AMQ Broker, you can send and receive huge messages, even when running with limited memory resources. To avoid the overhead of storing large messages in memory, you can configure AMQ Broker to store these large messages in the file system or in a database table. 2.4. Monitoring and management AMQ Broker provides several tools you can use to monitor and manage your brokers. AMQ Management Console AMQ Management Console is a web interface accessible through a web browser. You can use it to monitor network health, view broker topology, and create and delete broker resources. CLI AMQ Broker provides the artemis CLI, which you can use to administer your brokers. Using the CLI, you can create, start, and stop broker instances. The CLI also provides several commands for managing the message journal. Management API AMQ Broker provides an extensive management API. You can use it to modify a broker's configuration, create new resources, inspect these resources, and interact with them. Clients can also use the management API to manage the broker and subscribe to management notifications. AMQ Broker provides the following methods for using the management API: Java Management Extensions (JMX) - JMX is a standard technology for managing Java applications. The broker's management operations are exposed through AMQ MBeans interfaces. JMS API - Management operations are sent using standard JMS messages to a special management JMS queue. Logs Each broker instance logs error messages, warnings, and other broker-related information and activities. You can configure the logging levels, the location of the log files, and log format. You can then use the resulting log files to monitor the broker and diagnose error conditions. | null | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/getting_started_with_amq_broker/understanding-getting-started
Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1] | Chapter 4. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Type object Required roleRef 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. roleRef object RoleRef contains information that points to the role being used subjects array Subjects holds references to the objects the role applies to. subjects[] object Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. 4.1.1. .roleRef Description RoleRef contains information that points to the role being used Type object Required apiGroup kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 4.1.2. .subjects Description Subjects holds references to the objects the role applies to. Type array 4.1.3. .subjects[] Description Subject contains a reference to the object or user identities a role binding applies to. This can either hold a direct API object reference, or a value for non-objects such as user and group names. Type object Required kind name Property Type Description apiGroup string APIGroup holds the API group of the referenced subject. Defaults to "" for ServiceAccount subjects. Defaults to "rbac.authorization.k8s.io" for User and Group subjects. kind string Kind of object being referenced. Values defined by this API group are "User", "Group", and "ServiceAccount". If the Authorizer does not recognized the kind value, the Authorizer should report an error. name string Name of the object being referenced. namespace string Namespace of the referenced object. If the object kind is non-namespace, such as "User" or "Group", and this value is not empty the Authorizer should report an error. 4.2. API endpoints The following API endpoints are available: /apis/rbac.authorization.k8s.io/v1/rolebindings GET : list or watch objects of kind RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/rolebindings GET : watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings DELETE : delete collection of RoleBinding GET : list or watch objects of kind RoleBinding POST : create a RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings GET : watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name} GET : watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/rbac.authorization.k8s.io/v1/rolebindings Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind RoleBinding Table 4.2. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 4.2.2. /apis/rbac.authorization.k8s.io/v1/watch/rolebindings Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
pretty string If 'true', then the output is pretty printed.
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
HTTP method GET
Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead.
Table 4.4. HTTP responses HTTP code Response body
200 - OK WatchEvent schema
401 - Unauthorized Empty
4.2.3. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings
Table 4.5. Global path parameters Parameter Type Description
namespace string object name and auth scope, such as for teams and projects
Table 4.6. Global query parameters Parameter Type Description
pretty string If 'true', then the output is pretty printed.
HTTP method DELETE
Description delete collection of RoleBinding
Table 4.7. Query parameters Parameter Type Description
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
Table 4.8. Body parameters Parameter Type Description
body DeleteOptions schema
Table 4.9. HTTP responses HTTP code Response body
200 - OK Status schema
401 - Unauthorized Empty
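The following curl sketch illustrates a collection delete against this endpoint. It is not part of the reference tables above: the API server address, the my-project namespace, the team=demo label selector, and the TOKEN variable (obtained here, for example, with oc whoami -t) are placeholder assumptions used only for illustration. The dryRun=All parameter from Table 4.7 previews the effect without persisting it; drop it to perform the real deletion.
$ TOKEN=$(oc whoami -t)
$ curl -k -X DELETE \
    -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/json" \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings?labelSelector=team%3Ddemo&dryRun=All"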
HTTP method GET
Description list or watch objects of kind RoleBinding
Table 4.10. Query parameters Parameter Type Description
allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
Table 4.11. HTTP responses HTTP code Response body
200 - OK RoleBindingList schema
401 - Unauthorized Empty
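A minimal, illustrative list call against this endpoint might look like the following; the server address, the my-project namespace, and the TOKEN variable are placeholders, not values from this reference. The limit and continue parameters from Table 4.10 page through large result sets; the second request is only needed when the returned RoleBindingList carries a metadata.continue token.
$ curl -k -H "Authorization: Bearer $TOKEN" \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings?limit=50"
$ curl -k -H "Authorization: Bearer $TOKEN" \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings?limit=50&continue=<token-from-previous-response>"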
HTTP method POST
Description create a RoleBinding
Table 4.12. Query parameters Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint .
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 4.13. Body parameters Parameter Type Description
body RoleBinding schema
Table 4.14. HTTP responses HTTP code Response body
200 - OK RoleBinding schema
201 - Created RoleBinding schema
202 - Accepted RoleBinding schema
401 - Unauthorized Empty
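As an illustration of the create operation, the following sketch posts a minimal RoleBinding manifest to this endpoint. The binding name, namespace, subject, and role are invented placeholders rather than values taken from this reference; the server address and TOKEN are likewise assumptions.
$ cat <<'EOF' > rolebinding.json
{
  "apiVersion": "rbac.authorization.k8s.io/v1",
  "kind": "RoleBinding",
  "metadata": { "name": "example-binding", "namespace": "my-project" },
  "subjects": [
    { "kind": "User", "apiGroup": "rbac.authorization.k8s.io", "name": "example-user" }
  ],
  "roleRef": { "kind": "Role", "apiGroup": "rbac.authorization.k8s.io", "name": "example-role" }
}
EOF
$ curl -k -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    --data @rolebinding.json \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings"
A successful request returns 201 - Created with the stored RoleBinding, as listed in Table 4.14.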
4.2.4. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings
Table 4.15. Global path parameters Parameter Type Description
namespace string object name and auth scope, such as for teams and projects
Table 4.16. Global query parameters Parameter Type Description
allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
pretty string If 'true', then the output is pretty printed.
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
HTTP method GET
Description watch individual changes to a list of RoleBinding. deprecated: use the 'watch' parameter with a list operation instead.
Table 4.17. HTTP responses HTTP code Response body
200 - OK WatchEvent schema
401 - Unauthorized Empty
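Because this endpoint is deprecated in favor of the 'watch' parameter on the list operation, the following illustrative sketch uses the list endpoint from section 4.2.3 with watch=true. The server address, namespace, resource version, and TOKEN are placeholders; curl's -N flag disables output buffering so that WatchEvent objects stream as they arrive.
$ curl -k -N -H "Authorization: Bearer $TOKEN" \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings?watch=true&resourceVersion=<last-seen-version>"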
4.2.5. /apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/rolebindings/{name}
Table 4.18. Global path parameters Parameter Type Description
name string name of the RoleBinding
namespace string object name and auth scope, such as for teams and projects
Table 4.19. Global query parameters Parameter Type Description
pretty string If 'true', then the output is pretty printed.
HTTP method DELETE
Description delete a RoleBinding
Table 4.20. Query parameters Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately.
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both.
propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.
Table 4.21. Body parameters Parameter Type Description
body DeleteOptions schema
Table 4.22. HTTP responses HTTP code Response body
200 - OK Status schema
202 - Accepted Status schema
401 - Unauthorized Empty
HTTP method GET
Description read the specified RoleBinding
Table 4.23. HTTP responses HTTP code Response body
200 - OK RoleBinding schema
401 - Unauthorized Empty
HTTP method PATCH
Description partially update the specified RoleBinding
Table 4.24. Query parameters Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.
Table 4.25. Body parameters Parameter Type Description
body Patch schema
Table 4.26. HTTP responses HTTP code Response body
200 - OK RoleBinding schema
201 - Created RoleBinding schema
401 - Unauthorized Empty
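To show how the read and patch operations on this endpoint fit together, here is an illustrative curl sketch; the example-binding name, my-project namespace, server address, and TOKEN are placeholder assumptions. The PATCH uses the JSON merge patch content type, one of the patch types mentioned in Table 4.24.
$ curl -k -H "Authorization: Bearer $TOKEN" \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings/example-binding"
$ curl -k -X PATCH \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/merge-patch+json" \
    --data '{"metadata":{"labels":{"team":"demo"}}}' \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings/example-binding"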
HTTP method PUT
Description replace the specified RoleBinding
Table 4.27. Query parameters Parameter Type Description
dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed
fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint .
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.
Table 4.28. Body parameters Parameter Type Description
body RoleBinding schema
Table 4.29. HTTP responses HTTP code Response body
200 - OK RoleBinding schema
201 - Created RoleBinding schema
401 - Unauthorized Empty
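The replace operation takes a complete RoleBinding object whose metadata.name matches the {name} path segment. A hedged sketch, reusing the hypothetical rolebinding.json manifest from the create example above; the server address, namespace, binding name, and TOKEN remain placeholders. Note that the roleRef of an existing RoleBinding cannot be changed by a replace; update the subjects instead, or delete and re-create the binding.
$ curl -k -X PUT \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    --data @rolebinding.json \
    "https://<api-server>:6443/apis/rbac.authorization.k8s.io/v1/namespaces/my-project/rolebindings/example-binding"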
4.2.6. /apis/rbac.authorization.k8s.io/v1/watch/namespaces/{namespace}/rolebindings/{name}
Table 4.30. Global path parameters Parameter Type Description
name string name of the RoleBinding
namespace string object name and auth scope, such as for teams and projects
Table 4.31. Global query parameters Parameter Type Description
allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything.
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
pretty string If 'true', then the output is pretty printed.
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset.
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.
HTTP method GET
Description watch changes to an object of kind RoleBinding. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter.
Table 4.32. HTTP responses HTTP code Response body
200 - OK WatchEvent schema
401 - Unauthorized Empty
| null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/rbac_apis/rolebinding-rbac-authorization-k8s-io-v1 |