title | content | commands | url
---|---|---|---|
Chapter 1. Image APIs
|
Chapter 1. Image APIs 1.1. Image [image.openshift.io/v1] Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. ImageSignature [image.openshift.io/v1] Description ImageSignature holds a signature of an image. It allows you to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from the signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ImageStreamImage [image.openshift.io/v1] Description ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. User interfaces and regular users can use this resource to access the metadata details of a tagged image in the image stream history for viewing, since Image resources are not directly accessible to end users. A not found error will be returned if no such image is referenced by a tag within the ImageStream. Images are created when spec tags are set on an image stream that represent an image in an external registry, when pushing to the integrated registry, or when tagging an existing image from one image stream to another. The name of an image stream image is in the form "<STREAM>@<DIGEST>", where the digest is the content addressable identifier for the image (sha256:xxxxx...). You can use ImageStreamImages as the from.kind of an image stream spec tag to reference an image exactly. The only operation supported on the imagestreamimage endpoint is retrieving the image. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. ImageStreamImport [image.openshift.io/v1] Description The image stream import resource provides an easy way for a user to find and import container images from other container image registries into the server. Individual images or an entire image repository may be imported, and users may choose to see the results of the import prior to tagging the resulting images into the specified image stream. This API is intended for end-user tools that need to see the metadata of the image prior to import (for instance, to generate an application from it). Clients that know the desired image can continue to create spec.tags directly into their image streams. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5.
ImageStreamLayers [image.openshift.io/v1] Description ImageStreamLayers describes information about the layers referenced by images in this image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. ImageStreamMapping [image.openshift.io/v1] Description ImageStreamMapping represents a mapping from a single image stream tag to a container image as well as the reference to the container image stream the image came from. This resource is used by privileged integrators to create an image resource and to associate it with an image stream in the status tags field. Creating an ImageStreamMapping will allow any user who can view the image stream to tag or pull that image, so only create mappings where the user has proven they have access to the image contents directly. The only operation supported for this resource is create and the metadata name and namespace should be set to the image stream containing the tag that should be updated. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. ImageStream [image.openshift.io/v1] Description An ImageStream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a container image repository on a registry. Users typically update the spec.tags field to point to external images which are imported from container registries using credentials in your namespace with the pull secret type, or to existing image stream tags and images which are immediately accessible for tagging or pulling. The history of images applied to a tag is visible in the status.tags field and any user who can view an image stream is allowed to tag that image into their own image streams. Access to pull images from the integrated registry is granted by having the "get imagestreams/layers" permission on a given image stream. Users may remove a tag by deleting the imagestreamtag resource, which causes both spec and status for that tag to be removed. Image stream history is retained until an administrator runs the prune operation, which removes references that are no longer in use. To preserve a historical image, ensure there is a tag in spec pointing to that image by its digest. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. ImageStreamTag [image.openshift.io/v1] Description ImageStreamTag represents an Image that is retrieved by tag name from an ImageStream. Use this resource to interact with the tags and images in an image stream by tag, or to see the image details for a particular tag. The image associated with this resource is the most recently successfully tagged, imported, or pushed image (as described in the image stream status.tags.items list for this tag). If an import is in progress or has failed the image will be shown. Deleting an image stream tag clears both the status and spec fields of an image stream. If no image can be retrieved for a given tag, a not found error will be returned. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ImageTag [image.openshift.io/v1] Description ImageTag represents a single tag within an image stream and includes the spec, the status history, and the currently referenced image (if any) of the provided tag. 
This type replaces the ImageStreamTag by providing a full view of the tag. ImageTags are returned for every spec or status tag present on the image stream. If no tag exists in either form a not found error will be returned by the API. A create operation will succeed if no spec tag has already been defined and the spec field is set. Delete will remove both spec and status elements from the image stream. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. SecretList [image.openshift.io/v1] Description SecretList is a list of Secret. Type object
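For readers who want to see these resources in practice, here is a minimal sketch using the oc CLI; the namespace, stream, tag, and digest placeholders are hypothetical and follow the <STREAM>@<DIGEST> form described above, and the exact output depends on your cluster.
# List the tags end users normally interact with
oc get imagestreamtags -n <namespace>
# Fetch the metadata of a specific image in a stream's history by digest
oc get imagestreamimage <stream>@sha256:<digest> -n <namespace> -o yaml
# Reference an image exactly by tagging its digest into another stream
oc tag <namespace>/<stream>@sha256:<digest> <namespace>/<other-stream>:<tag>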
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/image_apis/image-apis
|
Chapter 2. Embedded caches
|
Chapter 2. Embedded caches Add Data Grid as a dependency to your Java project and use embedded caches that increase application performance and give you capabilities to handle complex use cases. 2.1. Embedded cache tutorials You can run embedded cache tutorials directly in your IDE or from the command line as follows: $ ./mvnw -s /path/to/maven-settings.xml clean package exec:exec Tutorial link Description Distributed caches Demonstrates how Distributed Caches work. Replicated caches Demonstrates how Replicated Caches work. Invalidated caches Demonstrates how Invalidated Caches work. Transactions Demonstrates how transactions work. Streams Demonstrates how Distributed Streams work. JCache integration Demonstrates how JCache works. Functional Maps Demonstrates how the Functional Map API works. Map API Demonstrates how the Map API works with Data Grid caches. Multimap Demonstrates how to use Multimap. Queries Uses Data Grid Query to perform full-text queries on cache values. Clustered Listeners Detects when data changes in an embedded cache with Clustered Listeners. Counters Demonstrates how to use an embedded Clustered Counter. Clustered Locks Demonstrates how to use an embedded Clustered Lock. Clustered execution Demonstrates how clustered execution works. Data Grid documentation You can find more resources about embedded caches in our documentation at: Embedding Data Grid Caches Querying Data Grid caches 2.2. Kubernetes and OpenShift tutorial This tutorial contains instructions on how to run Infinispan library mode (as a microservice) in Kubernetes/OpenShift. Prerequisites Maven, a Docker daemon running in the background, and a running OpenShift or Kubernetes cluster. Building the tutorial This tutorial is built using the Maven command: ./mvnw package Note that the target/ directory contains additional directories like docker (with the generated Dockerfile) and classes/META-INF/jkube with Kubernetes and OpenShift deployment templates. Tip If the Docker daemon is down, the build will omit processing Dockerfiles. Use the docker profile to turn it on manually. Deploying the tutorial to Kubernetes This is handled by the JKube Maven plugin; just invoke: mvn k8s:build k8s:push k8s:resource k8s:apply -Doptions.image=<IMAGE_NAME> where <IMAGE_NAME> must be replaced with the FQN of the container to deploy to Kubernetes. This container must be created in a repository that you have permissions to push to and is accessible from within your Kubernetes cluster. Viewing and scaling up Everything should be up and running at this point. Now log in to the OpenShift or Kubernetes cluster and scale the application: kubectl scale --replicas=3 deployment/$(kubectl get rs --namespace=myproject | grep infinispan | awk '{print $1}') --namespace=myproject Undeploying the tutorial This is handled by the JKube Maven plugin; just invoke:
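A hedged sketch of that undeploy step, taken from this chapter's command list and paired with a quick check that the pods in the myproject namespace used above are gone:
# Undeploy the tutorial application (JKube plugin goal)
mvn k8s:undeploy
# Verify that no infinispan pods remain in the namespace used earlier
kubectl get pods --namespace=myproject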
|
[
"./mvnw -s /path/to/maven-settings.xml clean package exec:exec",
"./mvnw package",
"mvn k8s:build k8s:push k8s:resource k8s:apply -Doptions.image=<IMAGE_NAME> 1",
"scale --replicas=3 deployment/USD(kubectl get rs --namespace=myproject | grep infinispan | awk '{print USD1}') --namespace=myproject",
"mvn k8s:undeploy"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_code_tutorials/embedded-tutorials
|
10.2. Bash (Bourne-Again Shell)
|
10.2. Bash (Bourne-Again Shell) Red Hat Enterprise Linux 6 includes version 4.1 of Bash as its default shell. This section describes the compatibility issues that this version introduces over previous versions. Bash-4.0 and later now allows process substitution constructs to pass unchanged through brace expansion, so any expansion of the contents will have to be separately specified, and each process substitution will have to be separately entered. Bash-4.0 and later now allows SIGCHLD to interrupt the wait builtin, as Posix specifies, so the SIGCHLD trap is no longer always invoked once per exiting child if you are using `wait' to wait for all children. Since Bash-4.0 and later now follows Posix rules for finding the closing delimiter of a $() command substitution, it will not behave as previous versions did, but will catch more syntax and parsing errors before spawning a subshell to evaluate the command substitution. The programmable completion code uses the same set of delimiting characters as readline when breaking the command line into words, rather than the set of shell metacharacters, so programmable completion and readline will be more consistent. When the read builtin times out, it attempts to assign any input read to specified variables, which also causes variables to be set to the empty string if there is not enough input. Previous versions discarded the characters read. In Bash-4.0 and later, when one of the commands in a pipeline is killed by a SIGINT while executing a command list, the shell acts as if it received the interrupt. Bash-4.0 and later versions change the handling of the set -e option so that the shell exits if a pipeline fails (and not just if the last command in the failing pipeline is a simple command). This is not as Posix specifies. There is work underway to update this portion of the standard; the Bash-4.0 behavior attempts to capture the consensus at the time of release. Bash-4.0 and later fixes a Posix mode bug that caused the . (source) builtin to search the current directory for its filename argument, even if "." is not in the system PATH. Posix says that the shell should not look in the PWD variable in this case. Bash-4.1 uses the current locale when comparing strings using the < and > operators to the [[ command. This can be reverted to the previous behavior by setting one of the compatNN shopt options. 10.2.1. Regular Expressions Further to the points already listed, quoting the pattern argument to the regular expression matching conditional operator =~ can cause regexp matching to stop working. This occurs on all architectures. In versions of bash prior to 3.2, the effect of quoting the regular expression argument to the [[ command's =~ operator was not specified. The practical effect was that double-quoting the pattern argument required backslashes to quote special pattern characters, which interfered with the backslash processing performed by double-quoted word expansion and was inconsistent with how the == shell pattern matching operator treated quoted characters. In bash version 3.2, the shell was changed to internally quote characters in single- and double-quoted string arguments to the =~ operator, which suppresses the special meaning of the characters that are important to regular expression processing (`.', `[', `\', `(', `)', `*', `+', `?', `{', `|', `^', and `$') and forces them to be matched literally. This is consistent with how the == pattern matching operator treats quoted portions of its pattern argument.
Since the treatment of quoted string arguments was changed, several issues have arisen, chief among them the problem of white space in pattern arguments and the differing treatment of quoted strings between bash 3.1 and bash 3.2. Both problems can be solved by using a shell variable to hold the pattern. Since word splitting is not performed when expanding shell variables in all operands of the [[ command, this provides the ability to quote patterns as you wish when assigning the variable, then expand the values to a single string that can contain whitespace. The first problem is solved by using backslashes or any other quoting mechanism to escape the white space in the patterns. Bash 4.0 introduces the concept of a compatibility level , controlled by several options to the shopt builtin. If the compat31 option is enabled, bash will revert to the 3.1 behavior with respect to quoting the right-hand side of the =~ operator.
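The workaround described above is easiest to see in a short example. This is a minimal sketch (the pattern and input strings are made up) showing the pattern stored in a variable, the unquoted expansion that keeps regex semantics, and the compat31 fallback:
#!/bin/bash
pattern='foo bar[0-9]+'      # quote whitespace and metacharacters at assignment time
line='foo bar42'
if [[ $line =~ $pattern ]]; then      # unquoted expansion: treated as a regular expression
    echo "regex match: ${BASH_REMATCH[0]}"
fi
[[ $line =~ "$pattern" ]] || echo "quoted right-hand side is matched literally"
shopt -s compat31                     # revert to bash 3.1 quoting behavior if required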
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-package_changes-bash
|
Chapter 9. Importers
|
Chapter 9. Importers 9.1. Importers The Import Wizard provides a means to create a model based on the structure of a data source, to convert existing metadata (that is, WSDL or XML Schema) into a source model, or to load existing metadata files into the current VDB. To launch the Import Wizard, select the File > Import action, or select a project, folder, or model in the tree, right-click, and select Import... Figure 9.1. Import Wizard
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/chap-importers
|
Chapter 1. Preparing to install on Azure Stack Hub
|
Chapter 1. Preparing to install on Azure Stack Hub 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users. You have installed Azure Stack Hub version 2008 or later. 1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure: You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates: You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure Stack Hub account
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
|
Chapter 12. Nested Virtualization
|
Chapter 12. Nested Virtualization 12.1. Overview As of Red Hat Enterprise Linux 7.5, nested virtualization is available as a Technology Preview for KVM guest virtual machines. With this feature, a guest virtual machine (also referred to as level 1 or L1 ) that runs on a physical host ( level 0 or L0 ) can act as a hypervisor, and create its own guest virtual machines ( L2 ). Nested virtualization is useful in a variety of scenarios, such as debugging hypervisors in a constrained environment and testing larger virtual deployments on a limited amount of physical resources. However, note that nested virtualization is not supported or recommended in production user environments, and is primarily intended for development and testing. Nested virtualization relies on host virtualization extensions to function, and it should not be confused with running guests in a virtual environment using the QEMU Tiny Code Generator (TCG) emulation, which is not supported in Red Hat Enterprise Linux.
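As a rough sketch of how this Technology Preview is typically enabled on an Intel host (the file name kvm-nested.conf is an arbitrary choice here; AMD hosts use the kvm_amd module instead):
# Check whether nested virtualization is currently enabled (prints Y or N)
cat /sys/module/kvm_intel/parameters/nested
# Persistently enable it, then reload the module while no guests are running
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel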
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/Nested_Virt
|
Preface
|
Preface Welcome to the Red Hat JBoss Core Services version 2.4.57 Service Pack 5 release. Red Hat JBoss Core Services Apache HTTP Server is an open source web server developed by the Apache Software Foundation . The Apache HTTP Server includes the following features: Implements the current HTTP standards, including HTTP/1.1 and HTTP/2. Supports Transport Layer Security (TLS) encryption through OpenSSL , which provides secure connections between the web server and web clients. Supports extensible functionality through the use of modules, some of which are included with the Red Hat JBoss Core Services Apache HTTP Server. This release of Red Hat JBoss Core Services includes some important security updates.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_5_release_notes/pr01
|
Chapter 2. Preparing to deploy storage cluster with disaster recovery enabled
|
Chapter 2. Preparing to deploy storage cluster with disaster recovery enabled 2.1. Requirements for enabling Metro-DR Ensure that you have at least three OpenShift Container Platform master nodes in three different zones. One master node in each of the three zones. Ensure that you have at least four OpenShift Container Platform worker nodes evenly distributed across the two Data Zones. For stretch cluster on bare metal, use the SSD drive as the root drive for OpenShift Container Platform master nodes. Ensure that each node is pre-labeled with its zone label. For more information, see the Applying topology zone labels to OpenShift Container Platform nodes section. The Metro-DR solution is designed for deployments where latencies do not exceed 2 ms between zones, with a maximum round-trip time (RTT) of 4 ms. Contact Red Hat Customer Support if you are planning to deploy with higher latencies. Note Flexible scaling and Arbiter both cannot be enabled at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas in an Arbiter cluster, you need to add at least one node in each of the two data zones. 2.2. Applying topology zone labels to OpenShift Container Platform nodes During a site outage, the zone that has the arbiter function makes use of the arbiter label. These labels are arbitrary and must be unique for the three locations. For example, you can label the nodes as follows: To apply the labels to the node: <NODENAME> Is the name of the node <LABEL> Is the topology zone label To validate the labels using the example labels for the three zones: <LABEL> Is the topology zone label Alternatively, you can run a single command to see all the nodes with their zones. The Metro-DR stretch cluster topology zone labels are now applied to the appropriate OpenShift Container Platform nodes to define the three locations. Next step Install the storage operators from OpenShift Container Platform OperatorHub. 2.3. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators → OperatorHub. Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as either 4.9 or stable. Installation mode as A specific namespace on the cluster. Installed Namespace as Operator recommended namespace openshift-local-storage. Update approval as Automatic. Click Install. Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.4. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least four worker nodes evenly distributed across two data centers in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see Planning your deployment.
Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command-line interface to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators → OperatorHub. Scroll or type OpenShift Data Foundation into the Filter by keyword box to search for the OpenShift Data Foundation Operator. Click Install. Set the following options on the Install Operator page: Update Channel as stable-4.9. Installation Mode as A specific namespace on the cluster. Installed Namespace as Operator recommended namespace openshift-storage. If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual. If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you selected Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin. Click Install. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Next steps Create an OpenShift Data Foundation cluster.
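A brief sketch combining the blank-node-selector command from this chapter's command list with the infra taint mentioned above; the node name is a placeholder, and the taint key follows the ODF infra-node guidance but should be treated as an assumption here:
# Specify a blank node selector for the openshift-storage namespace
oc annotate namespace openshift-storage openshift.io/node-selector=
# Taint a node as infra so that only OpenShift Data Foundation resources schedule on it
oc adm taint nodes <node-name> node.ocs.openshift.io/storage=true:NoSchedule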
|
[
"topology.kubernetes.io/zone=arbiter for Master0 topology.kubernetes.io/zone=datacenter1 for Master1, Worker1, Worker2 topology.kubernetes.io/zone=datacenter2 for Master2, Worker3, Worker4",
"oc label node <NODENAME> topology.kubernetes.io/zone= <LABEL>",
"oc get nodes -l topology.kubernetes.io/zone= <LABEL> -o name",
"oc get nodes -L topology.kubernetes.io/zone",
"oc annotate namespace openshift-storage openshift.io/node-selector="
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_metro-dr_stretch_cluster/preparing_to_deploy_storage_cluster_with_disaster_recovery_enabled
|
5.361. xmlrpc-c
|
5.361. xmlrpc-c 5.361.1. RHBA-2012:0954 - xmlrpc-c bug fix update Updated xmlrpc-c packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The xmlrpc-c packages provide a network protocol to allow a client program to make a simple RPC (remote procedure call) over the Internet. It converts an RPC into an XML document, sends it to a remote server using HTTP, and gets back the response in XML. Bug Fixes BZ# 653702 Prior to this update, the "xmlrpc-c-config client --libs" command returned unprocessed output, making it difficult to discern important information from it. This bug has been fixed and the output of the command is now properly pre-processed by the autoconf utility. BZ# 741641 A memory leak was discovered in the xmlrpc-c library by the valgrind utility. A patch has been provided to address this bug and the memory leak no longer occurs. Users of xmlrpc-c are advised to upgrade to these updated packages, which fix these bugs.
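As a hedged illustration of how the cleaned-up xmlrpc-c-config output is typically consumed in a build (client.c is a hypothetical source file):
# Compile a simple XML-RPC client using the flags reported by xmlrpc-c-config
gcc -o xmlrpc_client client.c $(xmlrpc-c-config client --cflags --libs)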
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/xmlrpc-c
|
Service Mesh
|
Service Mesh OpenShift Container Platform 4.13 Service Mesh installation, usage, and release notes Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/service_mesh/index
|
Release Notes
|
Release Notes Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
|
[
"bin/kc.[sh|bat] start --spi-brute-force-protector-default-brute-force-detector-allow-concurrent-requests=true",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: cache: configMapFile: name: my-configmap key: config.xml",
"spec: truststores: mystore: secret: name: mystore-secret myotherstore: secret: name: myotherstore-secret"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/release_notes/index
|
14.6.5. Sending a Keystroke Combination to a Specified Domain
|
14.6.5. Sending a Keystroke Combination to a Specified Domain Using the virsh send-key domain --codeset --holdtime keycode command you can send a sequence as a keycode to a specific domain. Each keycode can either be a numeric value or a symbolic name from the corresponding codeset. If multiple keycodes are specified, they are all sent simultaneously to the guest virtual machine and as such may be received in random order. If you need distinct keycodes, you must send the send-key command multiple times. If a --holdtime is given, each keystroke will be held for the specified amount in milliseconds. The --codeset allows you to specify a code set, the default being Linux, but the following options are permitted: linux - choosing this option causes the symbolic names to match the corresponding Linux key constant macro names and the numeric values are those offered by the Linux generic input event subsystems. xt - this will send a value that is defined by the XT keyboard controller. No symbolic names are provided. atset1 - the numeric values are those that are defined by the AT keyboard controller, set1 (XT compatible set). Extended keycodes from the atset1 may differ from extended keycodes in the XT codeset. No symbolic names are provided. atset2 - The numeric values are those defined by the AT keyboard controller, set 2. No symbolic names are provided. atset3 - The numeric values are those defined by the AT keyboard controller, set 3 (PS/2 compatible). No symbolic names are provided. os_x - The numeric values are those defined by the OS-X keyboard input subsystem. The symbolic names match the corresponding OS-X key constant macro names. xt_kbd - The numeric values are those defined by the Linux KBD device. These are a variant on the original XT codeset, but often with different encoding for extended keycodes. No symbolic names are provided. win32 - The numeric values are those defined by the Win32 keyboard input subsystem. The symbolic names match the corresponding Win32 key constant macro names. usb - The numeric values are those defined by the USB HID specification for keyboard input. No symbolic names are provided. rfb - The numeric values are those defined by the RFB extension for sending raw keycodes. These are a variant on the XT codeset, but extended keycodes have the low bit of the second byte set, instead of the high bit of the first byte. No symbolic names are provided.
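Two additional hedged examples of the command described above, run against a hypothetical guest named rhel6 (the same name used in the command list for this section):
# Send Ctrl+Alt+Del using symbolic names from the default linux codeset
virsh send-key rhel6 KEY_LEFTCTRL KEY_LEFTALT KEY_DELETE
# Hold a single numeric keycode from the xt codeset for one second
virsh send-key rhel6 --codeset xt --holdtime 1000 0xf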
|
[
"virsh send-key rhel6 --holdtime 1000 0xf"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-editing_a_guest_virtual_machines_configuration_file-sending_keystoke_combinations_to_a_specified_domain
|
Chapter 14. Managing instances
|
Chapter 14. Managing instances As a cloud administrator, you can monitor and manage the instances running on your cloud. 14.1. Securing connections to the VNC console of an instance You can secure connections to the VNC console for an instance by configuring the allowed TLS ciphers and the minimum protocol version to enforce for incoming client connections to the VNC proxy service. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open your Compute environment file. Configure the minimum protocol version to use for VNC console connections to instances: Replace <version> with the minimum allowed SSL/TLS protocol version. Set to one of the following valid values: default : Uses the underlying system OpenSSL defaults. tlsv1_1 : Use if you have clients that do not support a later version. Note TLS 1.0 and TLS 1.1 are deprecated in RHEL 8, and not supported in RHEL 9. tlsv1_2 : Use if you want to configure the SSL/TLS ciphers to use for VNC console connections to instances. tlsv1_3 : Use if you want to use the standard cipher library for TLSv1.3. Configuration of the NovaVNCProxySSLCiphers parameter is ignored. If you set the minimum allowed SSL/TLS protocol version to tlsv1_2 , then configure the SSL/TLS ciphers to use for VNC console connections to instances: Replace <ciphers> with a colon-delimited list of the cipher suites to allow. Retrieve the list of available ciphers from openssl . Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 14.2. Database cleaning The Compute service includes an administrative tool, nova-manage , that you can use to perform deployment, upgrade, clean-up, and maintenance-related tasks, such as applying database schemas, performing online data migrations during an upgrade, and managing and cleaning up the database. Director automates the following database management tasks on the overcloud by using cron: Archives deleted instance records by moving the deleted rows from the production tables to shadow tables. Purges deleted rows from the shadow tables after archiving is complete. 14.2.1. Configuring database management The cron jobs use default settings to perform database management tasks. By default, the database archiving cron jobs run daily at 00:01, and the database purging cron jobs run daily at 05:00, both with a jitter between 0 and 3600 seconds. You can modify these settings as required by using heat parameters. Procedure Open your Compute environment file. Add the heat parameter that controls the cron job that you want to add or modify. For example, to purge the shadow tables immediately after they are archived, set the following parameter to "True": For a complete list of the heat parameters to manage database cron jobs, see Configuration options for the Compute service automated database management . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 14.2.2. Configuration options for the Compute service automated database management Use the following heat parameters to enable and modify the automated cron jobs that manage the database. Table 14.1. Compute (nova) service cron parameters Parameter Description NovaCronArchiveDeleteAllCells Set this parameter to "True" to archive deleted instance records from all cells. Default: True NovaCronArchiveDeleteRowsAge Use this parameter to archive deleted instance records based on their age in days. 
Set to 0 to archive data older than today in shadow tables. Default: 90 NovaCronArchiveDeleteRowsDestination Use this parameter to configure the file for logging deleted instance records. Default: /var/log/nova/nova-rowsflush.log NovaCronArchiveDeleteRowsHour Use this parameter to configure the hour at which to run the cron command to move deleted instance records to another table. Default: 0 NovaCronArchiveDeleteRowsMaxDelay Use this parameter to configure the maximum delay, in seconds, before moving deleted instance records to another table. Default: 3600 NovaCronArchiveDeleteRowsMaxRows Use this parameter to configure the maximum number of deleted instance records that can be moved to another table. Default: 1000 NovaCronArchiveDeleteRowsMinute Use this parameter to configure the minute past the hour at which to run the cron command to move deleted instance records to another table. Default: 1 NovaCronArchiveDeleteRowsMonthday Use this parameter to configure on which day of the month to run the cron command to move deleted instance records to another table. Default: * (every day) NovaCronArchiveDeleteRowsMonth Use this parameter to configure in which month to run the cron command to move deleted instance records to another table. Default: * (every month) NovaCronArchiveDeleteRowsPurge Set this parameter to "True" to purge shadow tables immediately after scheduled archiving. Default: False NovaCronArchiveDeleteRowsUntilComplete Set this parameter to "True" to continue to move deleted instance records to another table until all records are moved. Default: True NovaCronArchiveDeleteRowsUser Use this parameter to configure the user that owns the crontab that archives deleted instance records and that has access to the log file the crontab uses. Default: nova NovaCronArchiveDeleteRowsWeekday Use this parameter to configure on which day of the week to run the cron command to move deleted instance records to another table. Default: * (every day) NovaCronPurgeShadowTablesAge Use this parameter to purge shadow tables based on their age in days. Set to 0 to purge shadow tables older than today. Default: 14 NovaCronPurgeShadowTablesAllCells Set this parameter to "True" to purge shadow tables from all cells. Default: True NovaCronPurgeShadowTablesDestination Use this parameter to configure the file for logging purged shadow tables. Default: /var/log/nova/nova-rowspurge.log NovaCronPurgeShadowTablesHour Use this parameter to configure the hour at which to run the cron command to purge shadow tables. Default: 5 NovaCronPurgeShadowTablesMaxDelay Use this parameter to configure the maximum delay, in seconds, before purging shadow tables. Default: 3600 NovaCronPurgeShadowTablesMinute Use this parameter to configure the minute past the hour at which to run the cron command to purge shadow tables. Default: 0 NovaCronPurgeShadowTablesMonth Use this parameter to configure in which month to run the cron command to purge the shadow tables. Default: * (every month) NovaCronPurgeShadowTablesMonthday Use this parameter to configure on which day of the month to run the cron command to purge the shadow tables. Default: * (every day) NovaCronPurgeShadowTablesUser Use this parameter to configure the user that owns the crontab that purges the shadow tables and that has access to the log file the crontab uses. Default: nova NovaCronPurgeShadowTablesVerbose Use this parameter to enable verbose logging in the log file for purged shadow tables. 
Default: False NovaCronPurgeShadowTablesWeekday Use this parameter to configure on which day of the week to run the cron command to purge the shadow tables. Default: * (every day) 14.3. Migrating virtual machine instances between Compute nodes You sometimes need to migrate instances from one Compute node to another Compute node in the overcloud, to perform maintenance, rebalance the workload, or replace a failed or failing node. Compute node maintenance If you need to temporarily take a Compute node out of service, for instance, to perform hardware maintenance or repair, kernel upgrades and software updates, you can migrate instances running on the Compute node to another Compute node. Failing Compute node If a Compute node is about to fail and you need to service it or replace it, you can migrate instances from the failing Compute node to a healthy Compute node. Failed Compute nodes If a Compute node has already failed, you can evacuate the instances. You can rebuild instances from the original image on another Compute node, using the same name, UUID, network addresses, and any other allocated resources the instance had before the Compute node failed. Workload rebalancing You can migrate one or more instances to another Compute node to rebalance the workload. For example, you can consolidate instances on a Compute node to conserve power, migrate instances to a Compute node that is physically closer to other networked resources to reduce latency, or distribute instances across Compute nodes to avoid hot spots and increase resiliency. Director configures all Compute nodes to provide secure migration. All Compute nodes also require a shared SSH key to provide the users of each host with access to other Compute nodes during the migration process. Director creates this key using the OS::TripleO::Services::NovaCompute composable service. This composable service is one of the main services included on all Compute roles by default. For more information, see Composable Services and Custom Roles in the Director Installation and Usage guide. Note If you have a functioning Compute node, and you want to make a copy of an instance for backup purposes, or to copy the instance to a different environment, follow the procedure in Importing virtual machines into the overcloud in the Director Installation and Usage guide. 14.3.1. Migration types Red Hat OpenStack Platform (RHOSP) supports the following types of migration. Cold migration Cold migration, or non-live migration, involves shutting down a running instance before migrating it from the source Compute node to the destination Compute node. Cold migration involves some downtime for the instance. The migrated instance maintains access to the same volumes and IP addresses. Note Cold migration requires that both the source and destination Compute nodes are running. Live migration Live migration involves moving the instance from the source Compute node to the destination Compute node without shutting it down, and while maintaining state consistency. Live migrating an instance involves little or no perceptible downtime. However, live migration does impact performance for the duration of the migration operation. Therefore, instances should be taken out of the critical path while being migrated. Important Live migration impacts the performance of the workload being moved. 
Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. Note Live migration requires that both the source and destination Compute nodes are running. In some cases, instances cannot use live migration. For more information, see Migration constraints. Evacuation If you need to migrate instances because the source Compute node has already failed, you can evacuate the instances. 14.3.2. Migration constraints Migration constraints typically arise with block migration, configuration disks, or when one or more instances access physical hardware on the Compute node. CPU constraints The source and destination Compute nodes must have the same CPU architecture. For example, Red Hat does not support migrating an instance from a ppc64le CPU to a x86_64 CPU. Migration between different CPU models is not supported. In some cases, the CPU of the source and destination Compute node must match exactly, such as instances that use CPU host passthrough. In all cases, the CPU features of the destination node must be a superset of the CPU features on the source node. Memory constraints The destination Compute node must have sufficient available RAM. Memory oversubscription can cause migration to fail. Block migration constraints Migrating instances that use disks that are stored locally on a Compute node takes significantly longer than migrating volume-backed instances that use shared storage, such as Red Hat Ceph Storage. This latency arises because OpenStack Compute (nova) migrates local disks block-by-block between the Compute nodes over the control plane network by default. By contrast, volume-backed instances that use shared storage, such as Red Hat Ceph Storage, do not have to migrate the volumes, because each Compute node already has access to the shared storage. Note Network congestion in the control plane network caused by migrating local disks or instances that consume large amounts of RAM might impact the performance of other systems that use the control plane network, such as RabbitMQ. Read-only drive migration constraints Migrating a drive is supported only if the drive has both read and write capabilities. For example, OpenStack Compute (nova) cannot migrate a CD-ROM drive or a read-only config drive. However, OpenStack Compute (nova) can migrate a drive with both read and write capabilities, including a config drive with a drive format such as vfat. Live migration constraints In some cases, live migrating instances involves additional constraints. Important Live migration impacts the performance of the workload being moved. Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. No new operations during migration To achieve state consistency between the copies of the instance on the source and destination nodes, RHOSP must prevent new operations during live migration. Otherwise, live migration might take a long time or potentially never end if writes to memory occur faster than live migration can replicate the state of the memory. CPU pinning with NUMA The NovaSchedulerEnabledFilters parameter in the Compute configuration must include the values AggregateInstanceExtraSpecsFilter and NUMATopologyFilter.
Multi-cell clouds In a multi-cell cloud, you can live migrate instances to a different host in the same cell, but not across cells. Floating instances When live migrating floating instances, if the configuration of NovaComputeCpuSharedSet on the destination Compute node is different from the configuration of NovaComputeCpuSharedSet on the source Compute node, the instances will not be allocated to the CPUs configured for shared (unpinned) instances on the destination Compute node. Therefore, if you need to live migrate floating instances, you must configure all the Compute nodes with the same CPU mappings for dedicated (pinned) and shared (unpinned) instances, or use a host aggregate for the shared instances. Destination Compute node capacity The destination Compute node must have sufficient capacity to host the instance that you want to migrate. SR-IOV live migration Instances with SR-IOV-based network interfaces can be live migrated. Live migrating instances with direct mode SR-IOV network interfaces incurs network downtime. This is because the direct mode interfaces need to be detached and re-attached during the migration. Packet loss on ML2/OVN deployments ML2/OVN does not support live migration without packet loss. This is because OVN cannot handle multiple port bindings and therefore does not know when a port is being migrated. To minimize packet loss during live migration, configure your ML2/OVN deployment to announce the instance on the destination host once migration is complete: Live migration on ML2/OVS deployments To minimize packet loss when live migrating instances in an ML2/OVS deployment, configure your ML2/OVS deployment to enable the Networking service (neutron) live migration events, and announce the instance on the destination host once migration is complete: Constraints that preclude live migration You cannot live migrate an instance that uses the following features. PCI passthrough QEMU/KVM hypervisors support attaching PCI devices on the Compute node to an instance. Use PCI passthrough to give an instance exclusive access to PCI devices, which appear and behave as if they are physically attached to the operating system of the instance. However, because PCI passthrough involves direct access to the physical devices, QEMU/KVM does not support live migration of instances using PCI passthrough. Port resource requests You cannot live migrate an instance that uses a port that has resource requests, such as a guaranteed minimum bandwidth QoS policy. Use the following command to check if a port has resource requests: 14.3.3. Preparing to migrate Before you migrate one or more instances, you need to determine the Compute node names and the IDs of the instances to migrate. Procedure Identify the source Compute node host name and the destination Compute node host name: List the instances on the source Compute node and locate the ID of the instance or instances that you want to migrate: Replace <source> with the name or ID of the source Compute node. Optional: If you are migrating instances from a source Compute node to perform maintenance on the node, you must disable the node to prevent the scheduler from assigning new instances to the node during maintenance: Replace <source> with the host name of the source Compute node. You are now ready to perform the migration. Follow the required procedure detailed in Cold migrating an instance or Live migrating an instance . 14.3.4. 
Cold migrating an instance Cold migrating an instance involves stopping the instance and moving it to another Compute node. Cold migration facilitates migration scenarios that live migrating cannot facilitate, such as migrating instances that use PCI passthrough. The scheduler automatically selects the destination Compute node. For more information, see Migration constraints . Procedure To cold migrate an instance, enter the following command to power off and move the instance: Replace <instance> with the name or ID of the instance to migrate. Specify the --block-migration flag if migrating a locally stored volume. Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance: A status of "VERIFY_RESIZE" indicates you need to confirm or revert the migration: If the migration worked as expected, confirm it: Replace <instance> with the name or ID of the instance to migrate. A status of "ACTIVE" indicates that the instance is ready to use. If the migration did not work as expected, revert it: Replace <instance> with the name or ID of the instance. Restart the instance: Replace <instance> with the name or ID of the instance. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. 14.3.5. Live migrating an instance Live migration moves an instance from a source Compute node to a destination Compute node with a minimal amount of downtime. Live migration might not be appropriate for all instances. For more information, see Migration constraints . Procedure To live migrate an instance, specify the instance and the destination Compute node: Replace <instance> with the name or ID of the instance. Replace <dest> with the name or ID of the destination Compute node. Note The openstack server migrate command covers migrating instances with shared storage, which is the default. Specify the --block-migration flag to migrate a locally stored volume: Confirm that the instance is migrating: Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance to confirm if the migration was successful: Replace <dest> with the name or ID of the destination Compute node. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. 14.3.6. Checking migration status Migration involves several state transitions before migration is complete. During a healthy migration, the migration state typically transitions as follows: Queued: The Compute service has accepted the request to migrate an instance, and migration is pending. Preparing: The Compute service is preparing to migrate the instance. Running: The Compute service is migrating the instance. Post-migrating: The Compute service has built the instance on the destination Compute node and is releasing resources on the source Compute node. Completed: The Compute service has completed migrating the instance and finished releasing resources on the source Compute node. Procedure Retrieve the list of migration IDs for the instance: Replace <instance> with the name or ID of the instance. 
Show the status of the migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Running the nova server-migration-show command returns the following example output: Tip The OpenStack Compute service measures progress of the migration by the number of remaining memory bytes to copy. If this number does not decrease over time, the migration might be unable to complete, and the Compute service might abort it. Sometimes instance migration can take a long time or encounter errors. For more information, see Troubleshooting migration . 14.3.7. Evacuating an instance If you want to move an instance from a dead or shut-down Compute node to a new host in the same environment, you can evacuate it. The evacuate process destroys the original instance and rebuilds it on another Compute node using the original image, instance name, UUID, network addresses, and any other resources the original instance had allocated to it. If the instance uses shared storage, the instance root disk is not rebuilt during the evacuate process, as the disk remains accessible by the destination Compute node. If the instance does not use shared storage, then the instance root disk is also rebuilt on the destination Compute node. Note You can only perform an evacuation when the Compute node is fenced, and the API reports that the state of the Compute node is "down" or "forced-down". If the Compute node is not reported as "down" or "forced-down", the evacuate command fails. To perform an evacuation, you must be a cloud administrator. 14.3.7.1. Evacuating one instance You can evacuate instances one at a time. Procedure Confirm that the instance is not running: Replace <node> with the name or UUID of the Compute node that hosts the instance. Confirm that the host Compute node is fenced or shut down: Replace <node> with the name or UUID of the Compute node that hosts the instance to evacuate. To perform an evacuation, the Compute node must have a status of down or forced-down . Disable the Compute node: Replace <node> with the name of the Compute node to evacuate the instance from. Replace <disable_host_reason> with details about why you disabled the Compute node. Evacuate the instance: Optional: Replace <pass> with the administrative password required to access the evacuated instance. If a password is not specified, a random password is generated and output when the evacuation is complete. Note The password is changed only when ephemeral instance disks are stored on the local hypervisor disk. The password is not changed if the instance is hosted on shared storage or has a Block Storage volume attached, and no error message is displayed to inform you that the password was not changed. Replace <instance> with the name or ID of the instance to evacuate. Optional: Replace <dest> with the name of the Compute node to evacuate the instance to. If you do not specify the destination Compute node, the Compute scheduler selects one for you. You can find possible Compute nodes by using the following command: Optional: Enable the Compute node when it is recovered: Replace <node> with the name of the Compute node to enable. 14.3.7.2. Evacuating all instances on a host You can evacuate all instances on a specified Compute node. Procedure Confirm that the instances to evacuate are not running: Replace <node> with the name or UUID of the Compute node that hosts the instances to evacuate. 
Confirm that the host Compute node is fenced or shut down: Replace <node> with the name or UUID of the Compute node that hosts the instances to evacuate. To perform an evacuation, the Compute node must have a status of down or forced-down . Disable the Compute node: Replace <node> with the name of the Compute node to evacuate the instances from. Replace <disable_host_reason> with details about why you disabled the Compute node. Evacuate all instances on a specified Compute node: Optional: Replace <dest> with the name of the destination Compute node to evacuate the instances to. If you do not specify the destination, the Compute scheduler selects one for you. You can find possible Compute nodes by using the following command: Replace <node> with the name of the Compute node to evacuate the instances from. Optional: Enable the Compute node when it is recovered: Replace <node> with the name of the Compute node to enable. 14.3.8. Troubleshooting migration The following issues can arise during instance migration: The migration process encounters errors. The migration process never ends. Performance of the instance degrades after migration. 14.3.8.1. Errors during migration The following issues can send the migration operation into an error state: Running a cluster with different versions of Red Hat OpenStack Platform (RHOSP). Specifying an instance ID that cannot be found. The instance you are trying to migrate is in an error state. The Compute service is shutting down. A race condition occurs. Live migration enters a failed state. When live migration enters a failed state, it is typically followed by an error state. The following common issues can cause a failed state: A destination Compute host is not available. A scheduler exception occurs. The rebuild process fails due to insufficient computing resources. A server group check fails. The instance on the source Compute node gets deleted before migration to the destination Compute node is complete. 14.3.8.2. Never-ending live migration Live migration can fail to complete, which leaves migration in a perpetual running state. A common reason for a live migration that never completes is that client requests to the instance running on the source Compute node create changes that occur faster than the Compute service can replicate them to the destination Compute node. Use one of the following methods to address this situation: Abort the live migration. Force the live migration to complete. Aborting live migration If the instance state changes faster than the migration procedure can copy it to the destination node, and you do not want to temporarily suspend the instance operations, you can abort the live migration. Procedure Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Abort the live migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Forcing live migration to complete If the instance state changes faster than the migration procedure can copy it to the destination node, and you want to temporarily suspend the instance operations to force migration to complete, you can force the live migration procedure to complete. Important Forcing live migration to complete might lead to perceptible downtime. Procedure Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Force the live migration to complete: Replace <instance> with the name or ID of the instance. 
Replace <migration_id> with the ID of the migration. 14.3.8.3. Instance performance degrades after migration For instances that use a NUMA topology, the source and destination Compute nodes must have the same NUMA topology and configuration, and the NUMA topology of the destination Compute node must have sufficient resources available. If the NUMA configuration of the source and destination Compute nodes differs, live migration can succeed while instance performance degrades. For example, if the source Compute node maps NIC 1 to NUMA node 0, but the destination Compute node maps NIC 1 to NUMA node 5, then after migration the instance might route network traffic from a CPU on NUMA node 0 across the bus to a CPU on NUMA node 5 to reach NIC 1. The instance continues to work as expected, but its performance degrades. Similarly, if NUMA node 0 on the source Compute node has sufficient available CPU and RAM, but NUMA node 0 on the destination Compute node already has instances consuming some of those resources, the instance might run correctly but suffer degraded performance. For more information, see Migration constraints .
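As a quick check before you live migrate an instance that uses a NUMA topology, you can compare the NUMA layout of the source and destination Compute nodes directly on the hosts. The following minimal sketch uses the standard lscpu utility; the host name compute-0 and the output values are illustrative only:
$ ssh compute-0 "lscpu | grep -i numa"
NUMA node(s):          2
NUMA node0 CPU(s):     0-11
NUMA node1 CPU(s):     12-23
Run the same command on the destination Compute node and confirm that the number of NUMA nodes and the CPU ranges match before you start the migration.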
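The following is a minimal worked example of the evacuation procedures in sections 14.3.7.1 and 14.3.7.2. The Compute node name compute-1, the instance name web-01, and the disable reason are hypothetical; substitute the values for your environment:
(overcloud)$ openstack server list --host compute-1 --all-projects
(overcloud)$ openstack baremetal node show compute-1
(overcloud)$ openstack compute service set compute-1 nova-compute --disable --disable-reason "hardware failure"
(overcloud)$ nova evacuate web-01
(overcloud)$ nova host-evacuate compute-1
(overcloud)$ openstack compute service set compute-1 nova-compute --enable
The nova evacuate command rebuilds the single instance web-01 on a host chosen by the Compute scheduler, whereas nova host-evacuate rebuilds every instance that was hosted on compute-1; run one or the other, not both. Re-enable the nova-compute service only after the node is repaired.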
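Similarly, a short worked example of the live migration troubleshooting flow in section 14.3.8.2, again with hypothetical values (instance web-01, migration ID 2):
(overcloud)$ nova server-migration-list web-01
(overcloud)$ nova server-migration-show web-01 2
(overcloud)$ nova live-migration-abort web-01 2
(overcloud)$ nova live-migration-force-complete web-01 2
Check the memory_remaining_bytes field in the server-migration-show output over a few intervals. If it stops decreasing, either abort the migration with live-migration-abort, or accept a brief pause of the instance and force it to finish with live-migration-force-complete; run only one of the two commands.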
|
[
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: NovaVNCProxySSLMinimumVersion: <version>",
"parameter_defaults: NovaVNCProxySSLCiphers: <ciphers>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"parameter_defaults: NovaCronArchiveDeleteRowsPurge: True",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml",
"parameter_defaults: ComputeExtraConfig: nova::workarounds::enable_qemu_monitor_announce_self: true",
"parameter_defaults: NetworkExtraConfig: neutron::server::notifications::nova::live_migration_events: true ComputeExtraConfig: nova::workarounds::enable_qemu_monitor_announce_self: true",
"openstack port show <port_name/port_id>",
"(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list",
"(overcloud)USD openstack server list --host <source> --all-projects",
"(overcloud)USD openstack compute service set <source> nova-compute --disable",
"(overcloud)USD openstack server migrate <instance> --wait",
"(overcloud)USD openstack server list --all-projects",
"(overcloud)USD openstack server resize --confirm <instance>",
"(overcloud)USD openstack server resize --revert <instance>",
"(overcloud)USD openstack server start <instance>",
"(overcloud)USD openstack compute service set <source> nova-compute --enable",
"(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait",
"(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait --block-migration",
"(overcloud)USD openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | MIGRATING | | ... | ... | +----------------------+--------------------------------------+",
"(overcloud)USD openstack server list --host <dest> --all-projects",
"(overcloud)USD openstack compute service set <source> nova-compute --enable",
"nova server-migration-list <instance> +----+-------------+----------- (...) | Id | Source Node | Dest Node | (...) +----+-------------+-----------+ (...) | 2 | - | - | (...) +----+-------------+-----------+ (...)",
"nova server-migration-show <instance> <migration_id>",
"+------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | created_at | 2017-03-08T02:53:06.000000 | | dest_compute | controller | | dest_host | - | | dest_node | - | | disk_processed_bytes | 0 | | disk_remaining_bytes | 0 | | disk_total_bytes | 0 | | id | 2 | | memory_processed_bytes | 65502513 | | memory_remaining_bytes | 786427904 | | memory_total_bytes | 1091379200 | | server_uuid | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | source_compute | compute2 | | source_node | - | | status | running | | updated_at | 2017-03-08T02:53:47.000000 | +------------------------+--------------------------------------+",
"(overcloud)USD openstack server list --host <node> --all-projects",
"(overcloud)[stack@director ~]USD openstack baremetal node show <node>",
"(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>",
"(overcloud)[stack@director ~]USD nova evacuate [--password <pass>] <instance> [<dest>]",
"(overcloud)[stack@director ~]USD openstack hypervisor list",
"(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --enable",
"(overcloud)USD openstack server list --host <node> --all-projects",
"(overcloud)[stack@director ~]USD openstack baremetal node show <node>",
"(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>",
"(overcloud)[stack@director ~]USD nova host-evacuate [--target_host <dest>] <node>",
"(overcloud)[stack@director ~]USD openstack hypervisor list",
"(overcloud)[stack@director ~]USD openstack compute service set <node> nova-compute --enable",
"nova server-migration-list <instance>",
"nova live-migration-abort <instance> <migration_id>",
"nova server-migration-list <instance>",
"nova live-migration-force-complete <instance> <migration_id>"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_managing-instances_managing-instances
|
7.243. sysfsutils
|
7.243. sysfsutils 7.243.1. RHBA-2012:1453 - sysfsutils bug fix update Updated sysfsutils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The sysfsutils packages provide utilities for interfacing with sysfs, the virtual file system that exposes device and driver information from the kernel device model. The suite includes the libsysfs library and the systool utility for querying device information from sysfs. Bug Fix BZ#671554 Prior to this update, sysfs directories were not closed as expected. As a consequence, the libsysfs library could leak memory in long running programs that frequently opened and closed sysfs directories. This update modifies the underlying code to close sysfs directories as expected. All users of sysfsutils are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/sysfsutils
|
Chapter 5. ironic
|
Chapter 5. ironic The following chapter contains information about the configuration options in the ironic service. 5.1. ironic.conf This section contains options for the /etc/ironic/ironic.conf file. 5.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/ironic/ironic.conf file. . Configuration option = Default value Type Description auth_strategy = keystone string value Authentication strategy used by ironic-api. "noauth" should not be used in a production environment because all authentication will be disabled. backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. bindir = USDpybasedir/bin string value Directory where ironic binaries are installed. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. debug_tracebacks_in_api = False boolean value Return server tracebacks in the API response for any error responses. WARNING: this is insecure and should not be used in a production environment. default_bios_interface = None string value Default bios interface to be used for nodes that do not have bios_interface field set. A complete list of bios interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.bios" entrypoint. default_boot_interface = None string value Default boot interface to be used for nodes that do not have boot_interface field set. A complete list of boot interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.boot" entrypoint. default_console_interface = None string value Default console interface to be used for nodes that do not have console_interface field set. A complete list of console interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.console" entrypoint. default_deploy_interface = None string value Default deploy interface to be used for nodes that do not have deploy_interface field set. A complete list of deploy interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.deploy" entrypoint. default_inspect_interface = None string value Default inspect interface to be used for nodes that do not have inspect_interface field set. A complete list of inspect interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.inspect" entrypoint. 
default_log_levels = ['amqp=WARNING', 'amqplib=WARNING', 'qpid.messaging=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'sqlalchemy=WARNING', 'stevedore=INFO', 'eventlet.wsgi.server=INFO', 'iso8601=WARNING', 'requests=WARNING', 'glanceclient=WARNING', 'urllib3.connectionpool=WARNING', 'keystonemiddleware.auth_token=INFO', 'keystoneauth.session=INFO', 'openstack=WARNING'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. default_management_interface = None string value Default management interface to be used for nodes that do not have management_interface field set. A complete list of management interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.management" entrypoint. default_network_interface = None string value Default network interface to be used for nodes that do not have network_interface field set. A complete list of network interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.network" entrypoint. default_portgroup_mode = active-backup string value Default mode for portgroups. Allowed values can be found in the linux kernel documentation on bonding: https://www.kernel.org/doc/Documentation/networking/bonding.txt . default_power_interface = None string value Default power interface to be used for nodes that do not have power_interface field set. A complete list of power interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.power" entrypoint. default_raid_interface = None string value Default raid interface to be used for nodes that do not have raid_interface field set. A complete list of raid interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.raid" entrypoint. default_rescue_interface = None string value Default rescue interface to be used for nodes that do not have rescue_interface field set. A complete list of rescue interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.rescue" entrypoint. default_resource_class = None string value Resource class to use for new nodes when no resource class is provided in the creation request. default_storage_interface = noop string value Default storage interface to be used for nodes that do not have storage_interface field set. A complete list of storage interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.storage" entrypoint. default_vendor_interface = None string value Default vendor interface to be used for nodes that do not have vendor_interface field set. A complete list of vendor interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.vendor" entrypoint. enabled_bios_interfaces = ['no-bios'] list value Specify the list of bios interfaces to load during service initialization. Missing bios interfaces, or bios interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one bios interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented bios interfaces. A complete list of bios interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.bios" entrypoint. 
When setting this value, please make sure that every enabled hardware type will have the same set of enabled bios interfaces on every ironic-conductor service. enabled_boot_interfaces = ['pxe'] list value Specify the list of boot interfaces to load during service initialization. Missing boot interfaces, or boot interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one boot interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented boot interfaces. A complete list of boot interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.boot" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled boot interfaces on every ironic-conductor service. enabled_console_interfaces = ['no-console'] list value Specify the list of console interfaces to load during service initialization. Missing console interfaces, or console interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one console interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented console interfaces. A complete list of console interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.console" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled console interfaces on every ironic-conductor service. enabled_deploy_interfaces = ['direct'] list value Specify the list of deploy interfaces to load during service initialization. Missing deploy interfaces, or deploy interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one deploy interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented deploy interfaces. A complete list of deploy interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.deploy" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled deploy interfaces on every ironic-conductor service. enabled_hardware_types = ['ipmi'] list value Specify the list of hardware types to load during service initialization. Missing hardware types, or hardware types which fail to initialize, will prevent the conductor service from starting. This option defaults to a recommended set of production-oriented hardware types. A complete list of hardware types present on your system may be found by enumerating the "ironic.hardware.types" entrypoint. enabled_inspect_interfaces = ['no-inspect'] list value Specify the list of inspect interfaces to load during service initialization. Missing inspect interfaces, or inspect interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one inspect interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. 
The default value is a recommended set of production-oriented inspect interfaces. A complete list of inspect interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.inspect" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled inspect interfaces on every ironic-conductor service. enabled_management_interfaces = ['ipmitool'] list value Specify the list of management interfaces to load during service initialization. Missing management interfaces, or management interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one management interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented management interfaces. A complete list of management interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.management" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled management interfaces on every ironic-conductor service. enabled_network_interfaces = ['flat', 'noop'] list value Specify the list of network interfaces to load during service initialization. Missing network interfaces, or network interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one network interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented network interfaces. A complete list of network interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.network" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled network interfaces on every ironic-conductor service. enabled_power_interfaces = ['ipmitool'] list value Specify the list of power interfaces to load during service initialization. Missing power interfaces, or power interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one power interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented power interfaces. A complete list of power interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.power" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled power interfaces on every ironic-conductor service. enabled_raid_interfaces = ['agent', 'no-raid'] list value Specify the list of raid interfaces to load during service initialization. Missing raid interfaces, or raid interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one raid interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented raid interfaces. A complete list of raid interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.raid" entrypoint. 
When setting this value, please make sure that every enabled hardware type will have the same set of enabled raid interfaces on every ironic-conductor service. enabled_rescue_interfaces = ['no-rescue'] list value Specify the list of rescue interfaces to load during service initialization. Missing rescue interfaces, or rescue interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one rescue interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented rescue interfaces. A complete list of rescue interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.rescue" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled rescue interfaces on every ironic-conductor service. enabled_storage_interfaces = ['cinder', 'noop'] list value Specify the list of storage interfaces to load during service initialization. Missing storage interfaces, or storage interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one storage interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented storage interfaces. A complete list of storage interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.storage" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled storage interfaces on every ironic-conductor service. enabled_vendor_interfaces = ['ipmitool', 'no-vendor'] list value Specify the list of vendor interfaces to load during service initialization. Missing vendor interfaces, or vendor interfaces which fail to initialize, will prevent the ironic-conductor service from starting. At least one vendor interface that is supported by each enabled hardware type must be enabled here, or the ironic-conductor service will not start. Must not be an empty list. The default value is a recommended set of production-oriented vendor interfaces. A complete list of vendor interfaces present on your system may be found by enumerating the "ironic.hardware.interfaces.vendor" entrypoint. When setting this value, please make sure that every enabled hardware type will have the same set of enabled vendor interfaces on every ironic-conductor service. esp_image = None string value Path to EFI System Partition image file. This file is recommended for creating UEFI bootable ISO images efficiently. ESP image should contain a FAT12/16/32-formatted file system holding EFI boot loaders (e.g. GRUB2) for each hardware architecture ironic needs to boot. This option is only used when neither ESP nor ISO deploy image is configured to the node being deployed in which case ironic will attempt to fetch ESP image from the configured location or extract ESP image from UEFI-bootable deploy ISO image. executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. force_raw_images = True boolean value If True, convert backing images to "raw" disk image format. 
graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. grub_config_path = /boot/grub/grub.cfg string value GRUB2 configuration file location on the UEFI ISO images produced by ironic. The default value is usually incorrect and should not be relied on. If you use a GRUB2 image from a certain distribution, use a distribution-specific path here, e.g. EFI/ubuntu/grub.cfg grub_config_template = USDpybasedir/common/grub_conf.template string value Template file for grub configuration file. hash_partition_exponent = 5 integer value Exponent to determine number of hash partitions to use when distributing load across conductors. Larger values will result in more even distribution of load and less load when rebalancing the ring, but more memory usage. Number of partitions per conductor is (2^hash_partition_exponent). This determines the granularity of rebalancing: given 10 hosts, and an exponent of the 2, there are 40 partitions in the ring.A few thousand partitions should make rebalancing smooth in most cases. The default is suitable for up to a few hundred conductors. Configuring for too many partitions has a negative impact on CPU usage. hash_ring_algorithm = md5 string value Hash function to use when building the hash ring. If running on a FIPS system, do not use md5. WARNING: all ironic services in a cluster MUST use the same algorithm at all times. Changing the algorithm requires an offline update. hash_ring_reset_interval = 15 integer value Time (in seconds) after which the hash ring is considered outdated and is refreshed on the access. host = <based on operating system> string value Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ (will be removed in the Stein release), a valid hostname, FQDN, or IP address. http_basic_auth_user_file = /etc/ironic/htpasswd string value Path to Apache format user authentication file used when auth_strategy=http_basic `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. isolinux_bin = /usr/lib/syslinux/isolinux.bin string value Path to isolinux binary file. isolinux_config_template = USDpybasedir/common/isolinux_config.template string value Template file for isolinux configuration file. ldlinux_c32 = None string value Path to ldlinux.c32 file. This file is required for syslinux 5.0 or later. If not specified, the file is looked for in "/usr/lib/syslinux/modules/bios/ldlinux.c32" and "/usr/share/syslinux/ldlinux.c32". log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. 
This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_in_db_max_size = 4096 integer value Max number of characters of any node last_error/maintenance_reason pushed to database. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is setto "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". minimum_memory_wait_retries = 6 integer value Number of retries to hold onto the worker before failing or returning the thread to the pool if the conductor can automatically retry. minimum_memory_wait_time = 15 integer value Seconds to wait between retries for free memory before launching the process. This, combined with memory_wait_retries allows the conductor to determine how long we should attempt to directly retry. minimum_memory_warning_only = False boolean value Setting to govern if Ironic should only warn instead of attempting to hold back the request in order to prevent the exhaustion of system memory. minimum_required_memory = 1024 integer value Minimum memory in MiB for the system to have available prior to starting a memory intensive process on the conductor. my_ip = <based on operating system> string value IPv4 address of this host. If unset, will determine the IP programmatically. If unable to do so, will use "127.0.0.1". NOTE: This field does accept an IPv6 address as an override for templates and URLs, however it is recommended that [DEFAULT]my_ipv6 is used along with DNS names for service URLs for dual-stack environments. my_ipv6 = None string value IP address of this host using IPv6. 
This value must be supplied via the configuration and cannot be adequately programmatically determined like the [DEFAULT]my_ip parameter for IPv4. notification_level = None string value Specifies the minimum level for which to send notifications. If not set, no notifications will be sent. The default is for this option to be unset. parallel_image_downloads = False boolean value Run image downloads and raw format conversions in parallel. pecan_debug = False boolean value Enable pecan debug mode. WARNING: this is insecure and should not be used in a production environment. pin_release_version = None string value Used for rolling upgrades. Setting this option downgrades (or pins) the Bare Metal API, the internal ironic RPC communication, and the database objects to their respective versions, so they are compatible with older services. When doing a rolling upgrade from version N to version N+1, set (to pin) this to N. To unpin (default), leave it unset and the latest versions will be used. publish_errors = False boolean value Enables or disables publication of error events. pybasedir = /usr/lib/python3.9/site-packages/ironic string value Directory where the ironic python module is installed. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. raw_image_growth_factor = 2.0 floating point value The scale factor used for estimating the size of a raw image converted from compact image formats such as QCOW2. Default is 2.0, must be greater than 1.0. rootwrap_config = /etc/ironic/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root. rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. rpc_transport = oslo string value Which RPC transport implementation to use between conductor and API services run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? state_path = USDpybasedir string value Top-level directory for maintaining ironic's state. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tempdir = /tmp string value Temporary working directory, default is Python temp dir. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. 
Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. versioned_notifications_topics = ['ironic_versioned_notifications'] list value Specifies the topics for the versioned notifications issued by Ironic. The default value is fine for most deployments and rarely needs to be changed. However, if you have a third-party service that consumes versioned notifications, it might be worth getting a topic for that service. Ironic will send a message containing a versioned notification payload to each topic queue in this list. The list of versioned notifications is visible in https://docs.openstack.org/ironic/latest/admin/notifications.html watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. webserver_connection_timeout = 60 integer value Connection timeout when accessing remote web servers with images. webserver_verify_ca = True string value CA certificates to be used for certificate verification. This can be either a Boolean value or a path to a CA_BUNDLE file.If set to True, the certificates present in the standard path are used to verify the host certificates.If set to False, the conductor will ignore verifying the SSL certificate presented by the host.If it"s a path, conductor uses the specified certificate for SSL verification. If the path does not exist, the behavior is same as when this value is set to True i.e the certificates present in the standard path are used for SSL verification.Defaults to True. 5.1.2. agent The following table outlines the options available under the [agent] group in the /etc/ironic/ironic.conf file. Table 5.1. agent Configuration option = Default value Type Description agent_api_version = v1 string value API version to use for communicating with the ramdisk agent. api_ca_file = None string value Path to the TLS CA that is used to start the bare metal API. In some boot methods this file can be passed to the ramdisk. certificates_path = /var/lib/ironic/certificates string value Path to store auto-generated TLS certificates used to validate connections to the ramdisk. command_timeout = 60 integer value Timeout (in seconds) for IPA commands. command_wait_attempts = 100 integer value Number of attempts to check for asynchronous commands completion before timing out. command_wait_interval = 6 integer value Number of seconds to wait for between checks for asynchronous commands completion. deploy_logs_collect = on_failure string value Whether Ironic should collect the deployment logs on deployment failure (on_failure), always or never. 
deploy_logs_local_path = /var/log/ironic/deploy string value The path to the directory where the logs should be stored, used when the deploy_logs_storage_backend is configured to "local". deploy_logs_storage_backend = local string value The name of the storage backend where the logs will be stored. deploy_logs_swift_container = ironic_deploy_logs_container string value The name of the Swift container to store the logs, used when the deploy_logs_storage_backend is configured to "swift". deploy_logs_swift_days_to_expire = 30 integer value Number of days before a log object is marked as expired in Swift. If None, the logs will be kept forever or until manually deleted. Used when the deploy_logs_storage_backend is configured to "swift". image_download_source = http string value Specifies whether direct deploy interface should try to use the image source directly or if ironic should cache the image on the conductor and serve it from ironic's own http server. manage_agent_boot = True boolean value Whether Ironic will manage booting of the agent ramdisk. If set to False, you will need to configure your mechanism to allow booting the agent ramdisk. max_command_attempts = 3 integer value This is the maximum number of attempts that will be done for IPA commands that fails due to network problems. memory_consumed_by_agent = 0 integer value The memory size in MiB consumed by agent when it is booted on a bare metal node. This is used for checking if the image can be downloaded and deployed on the bare metal node after booting agent ramdisk. This may be set according to the memory consumed by the agent ramdisk image. neutron_agent_max_attempts = 100 integer value Max number of attempts to validate a Neutron agent status before raising network error for a dead agent. neutron_agent_poll_interval = 2 integer value The number of seconds Neutron agent will wait between polling for device changes. This value should be the same as CONF.AGENT.polling_interval in Neutron configuration. neutron_agent_status_retry_interval = 10 integer value Wait time in seconds between attempts for validating Neutron agent status. post_deploy_get_power_state_retries = 6 integer value Number of times to retry getting power state to check if bare metal node has been powered off after a soft power off. post_deploy_get_power_state_retry_interval = 5 integer value Amount of time (in seconds) to wait between polling power state after trigger soft poweroff. require_tls = False boolean value If set to True, callback URLs without https:// will be rejected by the conductor. stream_raw_images = True boolean value Whether the agent ramdisk should stream raw images directly onto the disk or not. By streaming raw images directly onto the disk the agent ramdisk will not spend time copying the image to a tmpfs partition (therefore consuming less memory) prior to writing it to the disk. Unless the disk where the image will be copied to is really slow, this option should be set to True. Defaults to True. verify_ca = True string value Path to the TLS CA to validate connection to the ramdisk. Set to True to use the system default CA storage. Set to False to disable validation. Ignored when automatic TLS setup is used. 5.1.3. anaconda The following table outlines the options available under the [anaconda] group in the /etc/ironic/ironic.conf file. Table 5.2. 
anaconda Configuration option = Default value Type Description default_ks_template = USDpybasedir/drivers/modules/ks.cfg.template string value kickstart template to use when no kickstart template is specified in the instance_info or the glance OS image. 5.1.4. ansible The following table outlines the options available under the [ansible] group in the /etc/ironic/ironic.conf file. Table 5.3. ansible Configuration option = Default value Type Description ansible_extra_args = None string value Extra arguments to pass on every invocation of Ansible. ansible_playbook_script = ansible-playbook string value Path to "ansible-playbook" script. Default will search the USDPATH configured for user running ironic-conductor process. Provide the full path when ansible-playbook is not in USDPATH or installed in not default location. config_file_path = USDpybasedir/drivers/modules/ansible/playbooks/ansible.cfg string value Path to ansible configuration file. If set to empty, system default will be used. default_clean_playbook = clean.yaml string value Path (relative to USDplaybooks_path or absolute) to the default playbook used for node cleaning. It may be overridden by per-node ansible_clean_playbook option in node's driver_info field. default_clean_steps_config = clean_steps.yaml string value Path (relative to USDplaybooks_path or absolute) to the default auxiliary cleaning steps file used during the node cleaning. It may be overridden by per-node ansible_clean_steps_config option in node's driver_info field. default_deploy_playbook = deploy.yaml string value Path (relative to USDplaybooks_path or absolute) to the default playbook used for deployment. It may be overridden by per-node ansible_deploy_playbook option in node's driver_info field. default_key_file = None string value Absolute path to the private SSH key file to use by Ansible by default when connecting to the ramdisk over SSH. Default is to use default SSH keys configured for the user running the ironic-conductor service. Private keys with password must be pre-loaded into ssh-agent . It may be overridden by per-node ansible_key_file option in node's driver_info field. default_python_interpreter = None string value Absolute path to the python interpreter on the managed machines. It may be overridden by per-node ansible_python_interpreter option in node's driver_info field. By default, ansible uses /usr/bin/python default_shutdown_playbook = shutdown.yaml string value Path (relative to USDplaybooks_path or absolute) to the default playbook used for graceful in-band shutdown of the node. It may be overridden by per-node ansible_shutdown_playbook option in node's driver_info field. default_username = ansible string value Name of the user to use for Ansible when connecting to the ramdisk over SSH. It may be overridden by per-node ansible_username option in node's driver_info field. extra_memory = 10 integer value Extra amount of memory in MiB expected to be consumed by Ansible-related processes on the node. Affects decision whether image will fit into RAM. image_store_cafile = None string value Specific CA bundle to use for validating SSL connections to the image store. If not specified, CA available in the ramdisk will be used. Is not used by default playbooks included with the driver. Suitable for environments that use self-signed certificates. image_store_certfile = None string value Client cert to use for SSL connections to image store. Is not used by default playbooks included with the driver. 
image_store_insecure = False boolean value Skip verifying SSL connections to the image store when downloading the image. Setting it to "True" is only recommended for testing environments that use self-signed certificates. image_store_keyfile = None string value Client key to use for SSL connections to image store. Is not used by default playbooks included with the driver. playbooks_path = USDpybasedir/drivers/modules/ansible/playbooks string value Path to directory with playbooks, roles and local inventory. post_deploy_get_power_state_retries = 6 integer value Number of times to retry getting power state to check if bare metal node has been powered off after a soft power off. Value of 0 means do not retry on failure. post_deploy_get_power_state_retry_interval = 5 integer value Amount of time (in seconds) to wait between polling power state after trigger soft poweroff. verbosity = None integer value Set ansible verbosity level requested when invoking "ansible-playbook" command. 4 includes detailed SSH session logging. Default is 4 when global debug is enabled and 0 otherwise. 5.1.5. api The following table outlines the options available under the [api] group in the /etc/ironic/ironic.conf file. Table 5.4. api Configuration option = Default value Type Description api_workers = None integer value Number of workers for OpenStack Ironic API service. The default is equal to the number of CPUs available, but not more than 4. One worker is used if the CPU number cannot be detected. enable_ssl_api = False boolean value Enable the integrated stand-alone API to service requests via HTTPS instead of HTTP. If there is a front-end service performing HTTPS offloading from the service, this option should be False; note, you will want to enable proxy headers parsing with [oslo_middleware]enable_proxy_headers_parsing option or configure [api]public_endpoint option to set URLs in responses to the SSL terminated one. host_ip = 0.0.0.0 host address value The IP address or hostname on which ironic-api listens. max_limit = 1000 integer value The maximum number of items returned in a single response from a collection resource. network_data_schema = USDpybasedir/api/controllers/v1/network-data-schema.json string value Schema for network data used by this deployment. port = 6385 port value The TCP port on which ironic-api listens. public_endpoint = None string value Public URL to use when building the links to the API resources (for example, "https://ironic.rocks:6384"). If None the links will be built using the request's host URL. If the API is operating behind a proxy, you will want to change this to represent the proxy's URL. Defaults to None. Ignored when proxy headers parsing is enabled via [oslo_middleware]enable_proxy_headers_parsing option. ramdisk_heartbeat_timeout = 300 integer value Maximum interval (in seconds) for agent heartbeats. restrict_lookup = True boolean value Whether to restrict the lookup API to only nodes in certain states. 5.1.6. audit The following table outlines the options available under the [audit] group in the /etc/ironic/ironic.conf file. Table 5.5. audit Configuration option = Default value Type Description audit_map_file = /etc/ironic/api_audit_map.conf string value Path to audit map file for ironic-api service. Used only when API audit is enabled. enabled = False boolean value Enable auditing of API requests (for ironic-api service). `ignore_req_list = ` string value Comma separated list of Ironic REST API HTTP methods to be ignored during audit logging. 
For example: auditing will not be done on any GET or POST requests if this is set to "GET,POST". It is used only when API audit is enabled. 5.1.7. cinder The following table outlines the options available under the [cinder] group in the /etc/ironic/ironic.conf file. Table 5.6. cinder Configuration option = Default value Type Description action_retries = 3 integer value Number of retries in the case of a failed action (currently only used when detaching volumes). action_retry_interval = 5 integer value Retry interval in seconds in the case of a failed action (only specific actions are retried). auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. retries = 3 integer value Client retries in the case of a failed request connection. service-name = None string value The default service_name for endpoint URL discovery. service-type = volumev3 string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. 
If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.8. conductor The following table outlines the options available under the [conductor] group in the /etc/ironic/ironic.conf file. Table 5.7. conductor Configuration option = Default value Type Description allow_deleting_available_nodes = True boolean value Allow deleting nodes which are in state available . Defaults to True. allow_provisioning_in_maintenance = True boolean value Whether to allow nodes to enter or undergo deploy or cleaning when in maintenance mode. If this option is set to False, and a node enters maintenance during deploy or cleaning, the process will be aborted after the heartbeat. Automated cleaning or making a node available will also fail. If True (the default), the process will begin and will pause after the node starts heartbeating. Moving it from maintenance will make the process continue. automated_clean = True boolean value Enables or disables automated cleaning. Automated cleaning is a configurable set of steps, such as erasing disk drives, that are performed on the node to ensure it is in a baseline state and ready to be deployed to. This is done after instance deletion as well as during the transition from a "manageable" to "available" state. When enabled, the particular steps performed to clean a node depend on which driver that node is managed by; see the individual driver's documentation for details. NOTE: The introduction of the cleaning operation causes instance deletion to take significantly longer. In an environment where all tenants are trusted (eg, because there is only one tenant), this option could be safely disabled. bootloader = None string value Glance ID, http:// or file:// URL of the EFI system partition image containing EFI boot loader. This image will be used by ironic when building UEFI-bootable ISO out of kernel and ramdisk. Required for UEFI boot from partition images. check_allocations_interval = 60 integer value Interval between checks of orphaned allocations, in seconds. Set to 0 to disable checks. check_provision_state_interval = 60 integer value Interval between checks of provision timeouts, in seconds. Set to 0 to disable checks. check_rescue_state_interval = 60 integer value Interval (seconds) between checks of rescue timeouts. clean_callback_timeout = 1800 integer value Timeout (seconds) to wait for a callback from the ramdisk doing the cleaning. If the timeout is reached the node will be put in the "clean failed" provision state. Set to 0 to disable timeout. clean_step_priority_override = {} dict value Priority to run automated clean steps for both in-band and out of band clean steps, provided in interface.step_name:priority format, e.g. deploy.erase_devices_metadata:123. 
The option can be specified multiple times to define priorities for multiple steps. If set to 0, this specific step will not run during cleaning. If unset for an inband clean step, will use the priority set in the ramdisk. `conductor_group = ` string value Name of the conductor group to join. Can be up to 255 characters and is case insensitive. This conductor will only manage nodes with a matching "conductor_group" field set on the node. configdrive_swift_container = ironic_configdrive_container string value Name of the Swift container to store config drive data. Used when configdrive_use_object_store is True. configdrive_swift_temp_url_duration = None integer value The timeout (in seconds) after which a configdrive temporary URL becomes invalid. Defaults to deploy_callback_timeout if it is set, otherwise to 1800 seconds. Used when configdrive_use_object_store is True. deploy_callback_timeout = 1800 integer value Timeout (seconds) to wait for a callback from a deploy ramdisk. Set to 0 to disable timeout. deploy_kernel = None string value Glance ID, http:// or file:// URL of the kernel of the default deploy image. deploy_ramdisk = None string value Glance ID, http:// or file:// URL of the initramfs of the default deploy image. enable_mdns = False boolean value Whether to enable publishing the baremetal API endpoint via multicast DNS. force_power_state_during_sync = True boolean value During sync_power_state, should the hardware power state be set to the state recorded in the database (True) or should the database be updated based on the hardware state (False). heartbeat_interval = 10 integer value Seconds between conductor heart beats. heartbeat_timeout = 60 integer value Maximum time (in seconds) since the last check-in of a conductor. A conductor is considered inactive when this time has been exceeded. inspect_wait_timeout = 1800 integer value Timeout (seconds) for waiting for node inspection. 0 - unlimited. node_locked_retry_attempts = 3 integer value Number of attempts to grab a node lock. node_locked_retry_interval = 1 integer value Seconds to sleep between node lock attempts. periodic_max_workers = 8 integer value Maximum number of worker threads that can be started simultaneously by a periodic task. Should be less than RPC thread pool size. power_failure_recovery_interval = 300 integer value Interval (in seconds) between checking the power state for nodes previously put into maintenance mode due to power synchronization failure. A node is automatically moved out of maintenance mode once its power state is retrieved successfully. Set to 0 to disable this check. power_state_change_timeout = 60 integer value Number of seconds to wait for power operations to complete, i.e., so that a baremetal node is in the desired power state. If timed out, the power operation is considered a failure. power_state_sync_max_retries = 3 integer value During sync_power_state failures, limit the number of times Ironic should try syncing the hardware node power state with the node power state in DB require_rescue_password_hashed = False boolean value Option to cause the conductor to not fallback to an un-hashed version of the rescue password, permitting rescue with older ironic-python-agent ramdisks. rescue_callback_timeout = 1800 integer value Timeout (seconds) to wait for a callback from the rescue ramdisk. If the timeout is reached the node will be put in the "rescue failed" provision state. Set to 0 to disable timeout. 
rescue_kernel = None string value Glance ID, http:// or file:// URL of the kernel of the default rescue image. rescue_password_hash_algorithm = sha256 string value Password hash algorithm to be used for the rescue password. rescue_ramdisk = None string value Glance ID, http:// or file:// URL of the initramfs of the default rescue image. send_sensor_data = False boolean value Enable sending sensor data messages via the notification bus. send_sensor_data_for_undeployed_nodes = False boolean value The default for sensor data collection is to only collect data for machines that are deployed; however, operators may desire to know if there are failures in hardware that is not presently in use. When set to true, the conductor will collect sensor information from all nodes when sensor data collection is enabled via the send_sensor_data setting. send_sensor_data_interval = 600 integer value Seconds between conductor sending sensor data message to ceilometer via the notification bus. send_sensor_data_types = ['ALL'] list value List of comma-separated meter types which need to be sent to Ceilometer. The default value, "ALL", is a special value meaning send all the sensor data. send_sensor_data_wait_timeout = 300 integer value The time in seconds to wait for the send sensor data periodic task to finish before allowing the periodic call to happen again. Should be less than send_sensor_data_interval value. send_sensor_data_workers = 4 integer value The maximum number of workers that can be started simultaneously for the send sensor data periodic task. soft_power_off_timeout = 600 integer value Timeout (in seconds) of soft reboot and soft power off operation. This value always has to be positive. sync_local_state_interval = 180 integer value When conductors join or leave the cluster, existing conductors may need to update any persistent local state as nodes are moved around the cluster. This option controls how often, in seconds, each conductor will check for nodes that it should "take over". Set it to 0 (or a negative value) to disable the check entirely. sync_power_state_interval = 60 integer value Interval between syncing the node power state to the database, in seconds. Set to 0 to disable syncing. sync_power_state_workers = 8 integer value The maximum number of worker threads that can be started simultaneously to sync nodes power states from the periodic task. workers_pool_size = 100 integer value The size of the workers greenthread pool. Note that 2 threads will be reserved by the conductor itself for handling heart beats and periodic tasks. On top of that, sync_power_state_workers will take up to 7 green threads with the default value of 8. 5.1.9. console The following table outlines the options available under the [console] group in the /etc/ironic/ironic.conf file. Table 5.8. console Configuration option = Default value Type Description kill_timeout = 1 integer value Time (in seconds) to wait for the console subprocess to exit before sending SIGKILL signal. port_range = None string value A range of ports available to be used for the console proxy service running on the host of ironic conductor, in the form of <start>:<stop>. This option is used by both the Shellinabox and Socat consoles. socat_address = $my_ip IP address value IP address of Socat service running on the host of ironic conductor. Used only by Socat console. subprocess_checking_interval = 1 integer value Time interval (in seconds) for checking the status of console subprocess.
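A minimal Socat-based [console] configuration might look like the following sketch; the address and port range are placeholders, not recommendations:

[console]
# IP address and port range used by the Socat console proxy on the
# conductor host (placeholder values).
socat_address = 192.0.2.10
port_range = 10000:20000
# Check the console subprocess once per second (the default).
subprocess_checking_interval = 1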
subprocess_timeout = 10 integer value Time (in seconds) to wait for the console subprocess to start. terminal = shellinaboxd string value Path to serial console terminal program. Used only by Shell In A Box console. terminal_cert_dir = None string value Directory containing the terminal SSL cert (PEM) for serial console access. Used only by Shell In A Box console. terminal_pid_dir = None string value Directory for holding terminal pid files. If not specified, the temporary directory will be used. terminal_timeout = 600 integer value Timeout (in seconds) for the terminal session to be closed on inactivity. Set to 0 to disable timeout. Used only by Socat console. 5.1.10. cors The following table outlines the options available under the [cors] group in the /etc/ironic/ironic.conf file. Table 5.9. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials. allow_headers = [] list value Indicate which header field names may be used during the actual request. allow_methods = ['OPTIONS', 'GET', 'HEAD', 'POST', 'PUT', 'DELETE', 'TRACE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the request's "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = [] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 5.1.11. database The following table outlines the options available under the [database] group in the /etc/ironic/ironic.conf file. Table 5.10. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the next time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count.
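For reference, a typical [database] block combines the connection string with the pooling options above; the credentials, host and database name below are placeholders:

[database]
# Placeholder SQLAlchemy connection string; adjust user, password,
# host and database name for your environment.
connection = mysql+pymysql://ironic:IRONIC_DBPASS@192.0.2.5/ironic
# Recycle pooled connections after an hour and keep the pool small
# (these are the documented defaults).
connection_recycle_time = 3600
max_pool_size = 5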
mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_engine = InnoDB string value MySQL engine to use. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection loss. 5.1.12. deploy The following table outlines the options available under the [deploy] group in the /etc/ironic/ironic.conf file. Table 5.11. deploy Configuration option = Default value Type Description configdrive_use_object_store = False boolean value Whether to upload the config drive to object store. Set this option to True to store config drive in a swift endpoint. continue_if_disk_secure_erase_fails = False boolean value Defines what to do if a secure erase operation (NVMe or ATA) fails during cleaning in the Ironic Python Agent. If False, the cleaning operation will fail and the node will be put in clean failed state. If True, shred will be invoked and cleaning will continue. create_configuration_priority = None integer value Priority to run in-band clean step that creates RAID configuration from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 0 for the GenericHardwareManager). If set to 0, will not run during cleaning. default_boot_mode = bios string value Default boot mode to use when no boot mode is requested in node's driver_info, capabilities or in the instance_info configuration. Currently the default boot mode is "bios", but it will be changed to "uefi" in the future. It is recommended to set an explicit value for this option. This option only has an effect when the management interface supports boot mode management. default_boot_option = local string value Default boot option to use when no boot option is requested in node's driver_info. Defaults to "local". Prior to the Ussuri release, the default was "netboot". delete_configuration_priority = None integer value Priority to run in-band clean step that erases RAID configuration from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 0 for the GenericHardwareManager). If set to 0, will not run during cleaning. disk_erasure_concurrency = 1 integer value Defines the target pool size used by Ironic Python Agent ramdisk to erase disk devices. The number of threads created to erase disks will not exceed this value or the number of disks to be erased. enable_ata_secure_erase = True boolean value Whether to support the use of ATA Secure Erase during the cleaning process. Defaults to True. enable_nvme_secure_erase = True boolean value Whether to support the use of NVMe Secure Erase during the cleaning process. Currently nvme-cli format command is supported with user-data and crypto modes, depending on device capabilities. Defaults to True.
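As a sketch of how the boot and cleaning options above fit together, an operator might set, for example:

[deploy]
# Example values only: prefer UEFI with local boot when the node does
# not request anything else.
default_boot_mode = uefi
default_boot_option = local
# Keep secure erase enabled, but fall back to shred instead of failing
# cleaning if it does not succeed.
enable_ata_secure_erase = True
continue_if_disk_secure_erase_fails = True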
erase_devices_metadata_priority = None integer value Priority to run in-band clean step that erases metadata from devices, via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 99 for the GenericHardwareManager). If set to 0, will not run during cleaning. erase_devices_priority = None integer value Priority to run in-band erase devices via the Ironic Python Agent ramdisk. If unset, will use the priority set in the ramdisk (defaults to 10 for the GenericHardwareManager). If set to 0, will not run during cleaning. erase_skip_read_only = False boolean value If the ironic-python-agent should skip read-only devices when running the "erase_devices" clean step where block devices are zeroed out. This requires ironic-python-agent 6.0.0 or greater. By default a read-only device will cause non-metadata based cleaning operations to fail due to the possible operational security risk of data being retained between deployments of the bare metal node. external_callback_url = None string value Agent callback URL of the bare metal API for boot methods such as virtual media, where images could be served outside of the provisioning network. Defaults to the configuration from [service_catalog]. external_http_url = None string value URL of the ironic-conductor node's HTTP server for boot methods such as virtual media, where images could be served outside of the provisioning network. Does not apply when Swift is used. Defaults to http_url. fast_track = False boolean value Whether to allow deployment agents to perform lookup, heartbeat operations during initial states of a machine lifecycle and by-pass the normal setup procedures for a ramdisk. This feature also enables power operations which are part of deployment processes to be bypassed if the ramdisk has performed a heartbeat operation using the fast_track_timeout setting. fast_track_timeout = 300 integer value Seconds for which the last heartbeat event is to be considered valid for the purpose of a fast track sequence. This setting should generally be less than the number of seconds for "Power-On Self Test" and typical ramdisk start-up. This value should not exceed the [api]ramdisk_heartbeat_timeout setting. http_image_subdir = agent_images string value The name of subdirectory under ironic-conductor node's HTTP root path which is used to place instance images for the direct deploy interface, when local HTTP service is incorporated to provide instance image instead of swift tempurls. http_root = /httpboot string value ironic-conductor node's HTTP root path. http_url = None string value ironic-conductor node's HTTP server URL. Example: http://192.1.2.3:8080 power_off_after_deploy_failure = True boolean value Whether to power off a node after deploy failure. Defaults to True. ramdisk_image_download_source = local string value Specifies whether a boot iso image should be served from its own original location using the image source url directly, or if ironic should cache the image on the conductor and serve it from ironic's own http server. shred_final_overwrite_with_zeros = True boolean value Whether to write zeros to a node's block devices after writing random data. This will write zeros to the device even when deploy.shred_random_overwrite_iterations is 0. This option is only used if a device could not be ATA Secure Erased. Defaults to True. shred_random_overwrite_iterations = 1 integer value During shred, overwrite all block devices N times with random data. 
This is only used if a device could not be ATA Secure Erased. Defaults to 1. 5.1.13. dhcp The following table outlines the options available under the [dhcp] group in the /etc/ironic/ironic.conf file. Table 5.12. dhcp Configuration option = Default value Type Description dhcp_provider = neutron string value DHCP provider to use. "neutron" uses Neutron, and "none" uses a no-op provider. 5.1.14. disk_partitioner The following table outlines the options available under the [disk_partitioner] group in the /etc/ironic/ironic.conf file. Table 5.13. disk_partitioner Configuration option = Default value Type Description check_device_interval = 1 integer value After Ironic has completed creating the partition table, it continues to check for activity on the attached iSCSI device status at this interval prior to copying the image to the node, in seconds check_device_max_retries = 20 integer value The maximum number of times to check that the device is not accessed by another process. If the device is still busy after that, the disk partitioning will be treated as having failed. 5.1.15. disk_utils The following table outlines the options available under the [disk_utils] group in the /etc/ironic/ironic.conf file. Table 5.14. disk_utils Configuration option = Default value Type Description bios_boot_partition_size = 1 integer value Size of BIOS Boot partition in MiB when configuring GPT partitioned systems for local boot in BIOS. dd_block_size = 1M string value Block size to use when writing to the nodes disk. efi_system_partition_size = 200 integer value Size of EFI system partition in MiB when configuring UEFI systems for local boot. image_convert_attempts = 3 integer value Number of attempts to convert an image. image_convert_memory_limit = 2048 integer value Memory limit for "qemu-img convert" in MiB. Implemented via the address space resource limit. partition_detection_attempts = 3 integer value Maximum attempts to detect a newly created partition. partprobe_attempts = 10 integer value Maximum number of attempts to try to read the partition. 5.1.16. drac The following table outlines the options available under the [drac] group in the /etc/ironic/ironic.conf file. Table 5.15. drac Configuration option = Default value Type Description bios_factory_reset_timeout = 600 integer value Maximum time (in seconds) to wait for factory reset of BIOS settings to complete. boot_device_job_status_timeout = 30 integer value Maximum amount of time (in seconds) to wait for the boot device configuration job to transition to the correct state to allow a reboot or power on to complete. config_job_max_retries = 240 integer value Maximum number of retries for the configuration job to complete successfully. query_import_config_job_status_interval = 60 integer value Number of seconds to wait between checking for completed import configuration task query_raid_config_job_status_interval = 120 integer value Interval (in seconds) between periodic RAID job status checks to determine whether the asynchronous RAID configuration was successfully finished or not. 5.1.17. glance The following table outlines the options available under the [glance] group in the /etc/ironic/ironic.conf file. Table 5.16. glance Configuration option = Default value Type Description allowed_direct_url_schemes = [] list value A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file]. 
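For illustration, the option above could be combined with the download retry setting that follows in this table; both values are examples only:

[glance]
# Allow images whose direct_url uses the file:// scheme to be used directly.
allowed_direct_url_schemes = file
# Retry failed image downloads from glance a few times (example value).
num_retries = 3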
auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". num_retries = 0 integer value Number of retries when downloading an image from glance. password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = image string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. swift_account = None string value The account that Glance uses to communicate with Swift. The format is "AUTH_uuid". "uuid" is the UUID for the account configured in the glance-api.conf. For example: "AUTH_a422b2-91f3-2f46-74b7-d7c9e8958f5d30". If not set, the default value is calculated based on the ID of the project used to access Swift (as set in the [swift] section). 
Swift temporary URL format: "endpoint_url/api_version/account/container/object_id" swift_api_version = v1 string value The Swift API version to create a temporary URL for. Defaults to "v1". Swift temporary URL format: "endpoint_url/api_version/account/container/object_id" swift_container = glance string value The Swift container Glance is configured to store its images in. Defaults to "glance", which is the default in glance-api.conf. Swift temporary URL format: "endpoint_url/api_version/account/container/object_id" swift_endpoint_url = None string value The "endpoint" (scheme, hostname, optional port) for the Swift URL of the form "endpoint_url/api_version/account/container/object_id". Do not include trailing "/". For example, use "https://swift.example.com". If using RADOS Gateway, endpoint may also contain /swift path; if it does not, it will be appended. Used for temporary URLs, will be fetched from the service catalog, if not provided. swift_store_multiple_containers_seed = 0 integer value This should match a config by the same name in the Glance configuration file. When set to 0, a single-tenant store will only use one container to store all images. When set to an integer value between 1 and 32, a single-tenant store will use multiple containers to store images, and this value will determine how many containers are created. swift_temp_url_cache_enabled = False boolean value Whether to cache generated Swift temporary URLs. Setting it to true is only useful when an image caching proxy is used. Defaults to False. swift_temp_url_duration = 1200 integer value The length of time in seconds that the temporary URL will be valid for. Defaults to 20 minutes. If some deploys get a 401 response code when trying to download from the temporary URL, try raising this duration. This value must be greater than or equal to the value for swift_temp_url_expected_download_start_delay swift_temp_url_expected_download_start_delay = 0 integer value This is the delay (in seconds) from the time of the deploy request (when the Swift temporary URL is generated) to when the IPA ramdisk starts up and URL is used for the image download. This value is used to check if the Swift temporary URL duration is large enough to let the image download begin. Also if temporary URL caching is enabled this will determine if a cached entry will still be valid when the download starts. swift_temp_url_duration value must be greater than or equal to this option's value. Defaults to 0. swift_temp_url_key = None string value The secret token given to Swift to allow temporary URL downloads. Required for temporary URLs. For the Swift backend, the key on the service project (as set in the [swift] section) is used by default. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.18. 
healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/ironic/ironic.conf file. Table 5.17. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. enabled = False boolean value Enable the health check endpoint at /healthcheck. Note that this is unauthenticated. More information is available at https://docs.openstack.org/oslo.middleware/latest/reference/healthcheck_plugins.html. path = /healthcheck string value The path to respond to healthcheck requests on. 5.1.19. ilo The following table outlines the options available under the [ilo] group in the /etc/ironic/ironic.conf file. Table 5.18. ilo Configuration option = Default value Type Description ca_file = None string value CA certificate file to validate iLO. clean_priority_clear_secure_boot_keys = 0 integer value Priority for clear_secure_boot_keys clean step. This step is not enabled by default. It can be enabled to clear all secure boot keys enrolled with iLO. clean_priority_reset_bios_to_default = 10 integer value Priority for reset_bios_to_default clean step. clean_priority_reset_ilo = 0 integer value Priority for reset_ilo clean step. clean_priority_reset_ilo_credential = 30 integer value Priority for reset_ilo_credential clean step. This step requires "ilo_change_password" parameter to be updated in the node's driver_info with the new password. clean_priority_reset_secure_boot_keys_to_default = 20 integer value Priority for reset_secure_boot_keys clean step. This step will reset the secure boot keys to manufacturing defaults. client_port = 443 port value Port to be used for iLO operations. client_timeout = 60 integer value Timeout (in seconds) for iLO operations. default_boot_mode = auto string value Default boot mode to be used in provisioning when "boot_mode" capability is not provided in the "properties/capabilities" of the node. The default is "auto" for backward compatibility. When "auto" is specified, default boot mode will be selected based on boot mode settings on the system. file_permission = 420 integer value File permission for swift-less image hosting with the octal permission representation of file access permissions. This setting defaults to 644, or as the octal number 0o644 in Python. This setting must be set to the octal number representation, meaning starting with 0o. kernel_append_params = nofb nomodeset vga=normal string value Additional kernel parameters to pass down to the instance kernel. These parameters can be consumed by the kernel or by the applications by reading /proc/cmdline. Mind the severe cmdline size limit! Can be overridden by instance_info/kernel_append_params property.
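A small [ilo] sketch combining the options above; the port and timeout shown are the documented defaults, and the extra console parameter is a hypothetical addition to the documented default kernel parameters:

[ilo]
# Default iLO client port and timeout, shown explicitly for clarity.
client_port = 443
client_timeout = 60
# Kernel parameters passed to the instance kernel; console=ttyS0 is a
# hypothetical example appended to the documented default value.
kernel_append_params = nofb nomodeset vga=normal console=ttyS0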
oob_erase_devices_job_status_interval = 300 integer value Interval (in seconds) between periodic erase-devices status checks to determine whether the asynchronous out-of-band erase-devices was successfully finished or not. On an average, a 300GB HDD with default pattern "overwrite" would take approximately 9 hours and 300GB SSD with default pattern "block" would take approx. 30 seconds to complete sanitize disk erase. power_wait = 2 integer value Amount of time in seconds to wait in between power operations swift_ilo_container = ironic_ilo_container string value The Swift iLO container to store data. swift_object_expiry_timeout = 900 integer value Amount of time in seconds for Swift objects to auto-expire. use_web_server_for_images = False boolean value Set this to True to use http web server to host floppy images and generated boot ISO. This requires http_root and http_url to be configured in the [deploy] section of the config file. If this is set to False, then Ironic will use Swift to host the floppy images and generated boot_iso. verify_ca = True string value CA certificate to validate iLO. This can be either a Boolean value, a path to a CA_BUNDLE file or directory with certificates of trusted CAs. If set to True the driver will verify the host certificates; if False the driver will ignore verifying the SSL certificate. If it's a path the driver will use the specified certificate or one of the certificates in the directory. Defaults to True. 5.1.20. inspector The following table outlines the options available under the [inspector] group in the /etc/ironic/ironic.conf file. Table 5.19. inspector Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. callback_endpoint_override = None string value endpoint to use as a callback for posting back introspection data when boot is managed by ironic. Standard keystoneauth options are used by default. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. `extra_kernel_params = ` string value extra kernel parameters to pass to the inspection ramdisk when boot is managed by ironic (not ironic-inspector). Pairs key=value separated by spaces. insecure = False boolean value Verify HTTPS connections. 
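As an illustration of ironic-managed inspection, the callback and kernel parameter options described above might be set as follows; the URL and parameter values are placeholders:

[inspector]
# Placeholder callback endpoint used when ironic manages the inspection boot.
callback_endpoint_override = https://inspector.example.com:5050
# Space-separated key=value pairs appended to the inspection ramdisk
# kernel command line (hypothetical values).
extra_kernel_params = ipa-debug=1 ipa-inspection-collectors=default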
keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password power_off = True boolean value whether to power off a node after inspection finishes project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. require_managed_boot = False boolean value require that the in-band inspection boot is fully managed by ironic. Set this to True if your installation of ironic-inspector does not have a separate PXE boot environment. service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal-introspection string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. status_check_period = 60 integer value period (in seconds) to check status of nodes on inspection system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.21. ipmi The following table outlines the options available under the [ipmi] group in the /etc/ironic/ironic.conf file. Table 5.20. ipmi Configuration option = Default value Type Description additional_retryable_ipmi_errors = [] multi valued Additional errors ipmitool may encounter, specific to the environment it is run in. cipher_suite_versions = [] list value List of possible cipher suites versions that can be supported by the hardware in case the field cipher_suite is not set for the node. command_retry_timeout = 60 integer value Maximum time in seconds to retry retryable IPMI operations. (An operation is retryable, for example, if the requested operation fails because the BMC is busy.) Setting this too high can cause the sync power state periodic task to hang when there are slow or unresponsive BMCs. debug = False boolean value Enables all ipmi commands to be executed with an additional debugging output. 
This is a separate option as ipmitool can log a substantial amount of misleading text when in this mode. disable_boot_timeout = True boolean value Default timeout behavior whether ironic sends a raw IPMI command to disable the 60 second timeout for booting. Setting this option to False will NOT send that command, the default value is True. It may be overridden by per-node ipmi_disable_boot_timeout option in node's driver_info field. kill_on_timeout = True boolean value Kill ipmitool process invoked by ironic to read node power state if ipmitool process does not exit after command_retry_timeout timeout expires. Recommended setting is True min_command_interval = 5 integer value Minimum time, in seconds, between IPMI operations sent to a server. There is a risk with some hardware that setting this too low may cause the BMC to crash. Recommended setting is 5 seconds. use_ipmitool_retries = False boolean value When set to True and the parameters are supported by ipmitool, the number of retries and the retry interval are passed to ipmitool as parameters, and ipmitool will do the retries. When set to False, ironic will retry the ipmitool commands. Recommended setting is False 5.1.22. irmc The following table outlines the options available under the [irmc] group in the /etc/ironic/ironic.conf file. Table 5.21. irmc Configuration option = Default value Type Description auth_method = basic string value Authentication method to be used for iRMC operations clean_priority_restore_irmc_bios_config = 0 integer value Priority for restore_irmc_bios_config clean step. client_timeout = 60 integer value Timeout (in seconds) for iRMC operations fpga_ids = [] list value List of vendor IDs and device IDs for CPU FPGA to inspect. List items are in format vendorID/deviceID and separated by commas. CPU inspection will use this value to find existence of CPU FPGA in a node. If this option is not defined, then leave out CUSTOM_CPU_FPGA in node traits. Sample fpga_ids value: 0x1000/0x0079,0x2100/0x0080 gpu_ids = [] list value List of vendor IDs and device IDs for GPU device to inspect. List items are in format vendorID/deviceID and separated by commas. GPU inspection will use this value to count the number of GPU device in a node. If this option is not defined, then leave out pci_gpu_devices in capabilities property. Sample gpu_ids value: 0x1000/0x0079,0x2100/0x0080 port = 443 port value Port to be used for iRMC operations query_raid_config_fgi_status_interval = 300 integer value Interval (in seconds) between periodic RAID status checks to determine whether the asynchronous RAID configuration was successfully finished or not. Foreground Initialization (FGI) will start 5 minutes after creating virtual drives. remote_image_server = None string value IP of remote image server remote_image_share_name = share string value share name of remote_image_server remote_image_share_root = /remote_image_share_root string value Ironic conductor node's "NFS" or "CIFS" root path remote_image_share_type = CIFS string value Share type of virtual media `remote_image_user_domain = ` string value Domain name of remote_image_user_name remote_image_user_name = None string value User name of remote_image_server remote_image_user_password = None string value Password of remote_image_user_name sensor_method = ipmitool string value Sensor data retrieval method. snmp_auth_proto = sha string value SNMPv3 message authentication protocol ID. Required for version v3 . Will be ignored if the version of python-scciclient is before 0.10.1. 
The valid options are sha , sha256 , sha384 and sha512 , while sha is the only supported protocol in iRMC S4 and S5, and from iRMC S6, sha256 , sha384 and sha512 are supported, but sha is not supported any more. snmp_community = public string value SNMP community. Required for versions "v1" and "v2c" snmp_polling_interval = 10 integer value SNMP polling interval in seconds snmp_port = 161 port value SNMP port snmp_priv_proto = aes string value SNMPv3 message privacy (encryption) protocol ID. Required for version v3 . Will be ignored if the version of python-scciclient is before 0.10.1. aes is supported. snmp_security = None string value SNMP security name. Required for version v3 . Will be ignored if driver_info/irmc_snmp_user is set. snmp_version = v2c string value SNMP protocol version 5.1.23. ironic_lib The following table outlines the options available under the [ironic_lib] group in the /etc/ironic/ironic.conf file. Table 5.22. ironic_lib Configuration option = Default value Type Description fatal_exception_format_errors = False boolean value Used if there is a formatting error when generating an exception message (a programming error). If True, raise an exception; if False, use the unformatted message. root_helper = sudo ironic-rootwrap /etc/ironic/rootwrap.conf string value Command that is prefixed to commands that are run as root. If not specified, no commands are run as root. 5.1.24. iscsi The following table outlines the options available under the [iscsi] group in the /etc/ironic/ironic.conf file. Table 5.23. iscsi Configuration option = Default value Type Description conv_flags = None string value Flags that need to be sent to the dd command, to control the conversion of the original file when copying to the host. It can contain several options separated by commas. portal_port = 3260 port value The port number on which the iSCSI portal listens for incoming connections. verify_attempts = 3 integer value Maximum attempts to verify an iSCSI connection is active, sleeping 1 second between attempts. Defaults to 3. 5.1.25. json_rpc The following table outlines the options available under the [json_rpc] group in the /etc/ironic/ironic.conf file. Table 5.24. json_rpc Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_strategy = None string value Authentication strategy used by JSON RPC. Defaults to the global auth_strategy setting. auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to host_ip = :: host address value The IP address or hostname on which JSON RPC will listen. 
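For example, conductors could authenticate JSON RPC calls to each other with HTTP Basic credentials; the listener address below is a placeholder and the user file path is the documented default from the entry that follows:

[json_rpc]
# Use HTTP Basic authentication between services (placeholder listener address).
auth_strategy = http_basic
host_ip = 192.0.2.20
http_basic_auth_user_file = /etc/ironic/htpasswd-json-rpc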
http_basic_auth_user_file = /etc/ironic/htpasswd-json-rpc string value Path to Apache format user authentication file used when auth_strategy=http_basic. http_basic_password = None string value Password to use for HTTP Basic authentication client requests. http_basic_username = None string value Name of the user to use for HTTP Basic authentication client requests. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password port = 8089 port value The port to use for JSON RPC. project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID use_ssl = False boolean value Whether to use TLS for JSON RPC. user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 5.1.26. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/ironic/ironic.conf file. Table 5.25. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin-specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens. Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding.
"permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. 
service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 5.1.27. mdns The following table outlines the options available under the [mdns] group in the /etc/ironic/ironic.conf file. Table 5.26. mdns Configuration option = Default value Type Description interfaces = None list value List of IP addresses of interfaces to use for mDNS. Defaults to all interfaces on the system. lookup_attempts = 3 integer value Number of attempts to lookup a service. params = {} dict value Additional parameters to pass for the registered service. registration_attempts = 5 integer value Number of attempts to register a service. Currently has to be larger than 1 because of race conditions in the zeroconf library. 5.1.28. metrics The following table outlines the options available under the [metrics] group in the /etc/ironic/ironic.conf file. Table 5.27. metrics Configuration option = Default value Type Description agent_backend = noop string value Backend for the agent ramdisk to use for metrics. Default possible backends are "noop" and "statsd". agent_global_prefix = None string value Prefix all metric names sent by the agent ramdisk with this value. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. agent_prepend_host = False boolean value Prepend the hostname to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. agent_prepend_host_reverse = True boolean value Split the prepended host value by "." and reverse it for metrics sent by the agent ramdisk (to better match the reverse hierarchical form of domain names). agent_prepend_uuid = False boolean value Prepend the node's Ironic uuid to all metric names sent by the agent ramdisk. The format of metric names is [global_prefix.][uuid.][host_name.]prefix.metric_name. backend = noop string value Backend to use for the metrics system. global_prefix = None string value Prefix all metric names with this value. By default, there is no global prefix. The format of metric names is [global_prefix.][host_name.]prefix.metric_name. prepend_host = False boolean value Prepend the hostname to all metric names. The format of metric names is [global_prefix.][host_name.]prefix.metric_name. prepend_host_reverse = True boolean value Split the prepended host value by "." and reverse it (to better match the reverse hierarchical form of domain names). 5.1.29. metrics_statsd The following table outlines the options available under the [metrics_statsd] group in the /etc/ironic/ironic.conf file. Table 5.28. 
metrics_statsd Configuration option = Default value Type Description agent_statsd_host = localhost string value Host for the agent ramdisk to use with the statsd backend. This must be accessible from networks the agent is booted on. agent_statsd_port = 8125 port value Port for the agent ramdisk to use with the statsd backend. statsd_host = localhost string value Host for use with the statsd backend. statsd_port = 8125 port value Port to use with the statsd backend. 5.1.30. molds The following table outlines the options available under the [molds] group in the /etc/ironic/ironic.conf file. Table 5.29. molds Configuration option = Default value Type Description password = None string value Password for "http" Basic auth. By default set empty. retry_attempts = 3 integer value Retry attempts for saving or getting configuration molds. retry_interval = 3 integer value Retry interval for saving or getting configuration molds. storage = swift string value Configuration mold storage location. Supports "swift" and "http". By default "swift". user = None string value User for "http" Basic auth. By default set empty. 5.1.31. neutron The following table outlines the options available under the [neutron] group in the /etc/ironic/ironic.conf file. Table 5.30. neutron Configuration option = Default value Type Description add_all_ports = False boolean value Option to enable transmission of all ports to neutron when creating ports for provisioning, cleaning, or rescue. This is done without IP addresses assigned to the port, and may be useful in some bonded network configurations. auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file cleaning_network = None string value Neutron network UUID or name for the ramdisk to be booted into for cleaning nodes. Required for "neutron" network interface. It is also required if cleaning nodes when using "flat" network interface or "neutron" DHCP provider. If a name is provided, it must be unique among all networks or cleaning will fail. cleaning_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during cleaning of the nodes. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, default security group is used. collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. dhcpv6_stateful_address_count = 4 integer value Number of IPv6 addresses to allocate for ports created for provisioning, cleaning, rescue or inspection on DHCPv6-stateful networks. Different stages of the chain-loading process will request addresses with different CLID/IAID. 
Due to non-identical identifiers multiple addresses must be reserved for the host to ensure each step of the boot process can successfully lease addresses. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. inspection_network = None string value Neutron network UUID or name for the ramdisk to be booted into for in-band inspection of nodes. If a name is provided, it must be unique among all networks or inspection will fail. inspection_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during the node inspection process. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, the default security group is used. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password port_setup_delay = 0 integer value Delay value to wait for Neutron agents to setup sufficient DHCP configuration for port. project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to provisioning_network = None string value Neutron network UUID or name for the ramdisk to be booted into for provisioning nodes. Required for "neutron" network interface. If a name is provided, it must be unique among all networks or deploy will fail. provisioning_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during provisioning of the nodes. Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, default security group is used. region-name = None string value The default region_name for endpoint URL discovery. request_timeout = 45 integer value Timeout for request processing when interacting with Neutron. This value should be increased if neutron port action timeouts are observed as neutron performs pre-commit validation prior returning to the API client which can take longer than normal client/server interactions. rescuing_network = None string value Neutron network UUID or name for booting the ramdisk for rescue mode. This is not the network that the rescue ramdisk will use post-boot - the tenant network is used for that. Required for "neutron" network interface, if rescue mode will be used. It is not used for the "flat" or "noop" network interfaces. If a name is provided, it must be unique among all networks or rescue will fail. rescuing_network_security_groups = [] list value List of Neutron Security Group UUIDs to be applied during the node rescue process. 
Optional for the "neutron" network interface and not used for the "flat" or "noop" network interfaces. If not specified, the default security group is used. retries = 3 integer value DEPRECATED: Client retries in the case of a failed request. service-name = None string value The default service_name for endpoint URL discovery. service-type = network string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.32. nova The following table outlines the options available under the [nova] group in the /etc/ironic/ironic.conf file. Table 5.31. nova Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. 
min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. send_power_notifications = True boolean value When set to True, it will enable the support for power state change callbacks to nova. This option should be set to False in deployments that do not have the openstack compute service. service-name = None string value The default service_name for endpoint URL discovery. service-type = compute string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.33. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/ironic/ironic.conf file. Table 5.32. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 5.1.34. oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/ironic/ironic.conf file. Table 5.33. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. 
broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. 
rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 5.1.35. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/ironic/ironic.conf file. Table 5.34. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 5.1.36. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/ironic/ironic.conf file. Table 5.35. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 5.1.37. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/ironic/ironic.conf file. Table 5.36. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore. enable_cancel_on_failover = False boolean value Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services. heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold we check the heartbeat. 
heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if the heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set, compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ. rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 5.1.38. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/ironic/ironic.conf file. Table 5.37. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 5.1.39. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/ironic/ironic.conf file. Table 5.38. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. 
It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path to client key file for REST based policy check remote_ssl_verify_server_crt = False boolean value Server identity verification for REST based policy check 5.1.40. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/ironic/ironic.conf file. Table 5.39. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If the application is running as a WSGI application, it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory in which to create a file 5.1.41. profiler The following table outlines the options available under the [profiler] group in the /etc/ironic/ironic.conf file. Table 5.40. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project's operations. If the profiling is triggered by another project, this project part will be empty. 
es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines the maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separate place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis Sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis Sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 5.1.42. pxe The following table outlines the options available under the [pxe] group in the /etc/ironic/ironic.conf file. Table 5.41. pxe Configuration option = Default value Type Description boot_retry_check_interval = 90 integer value Interval (in seconds) between periodic checks on PXE boot retry. Has no effect if boot_retry_timeout is not set. boot_retry_timeout = None integer value Timeout (in seconds) after which PXE boot should be retried. Must be less than [conductor]deploy_callback_timeout. Disabled by default. default_ephemeral_format = ext4 string value Default file system format for ephemeral partition, if one is created. dir_permission = None integer value The permission that will be applied to the TFTP folders upon creation. This should be set to the permission such that the TFTP server has access to read the contents of the configured TFTP folder. This setting is only required when the operating system's umask is restrictive such that ironic-conductor is creating files that cannot be read by the TFTP server. Setting to <None> will result in the operating system's umask being utilized for the creation of new tftp folders. It is recommended that an octal representation is specified. 
For example: 0o755 enable_netboot_fallback = False boolean value If True, generate a PXE environment even for nodes that use local boot. This is useful when the driver cannot switch nodes to local boot, e.g. with SNMP or with Redfish on machines that cannot do persistent boot. Mostly useful for standalone ironic since Neutron will prevent incorrect PXE boot. image_cache_size = 20480 integer value Maximum size (in MiB) of cache for master images, including those in use. image_cache_ttl = 10080 integer value Maximum TTL (in minutes) for old master images in cache. images_path = /var/lib/ironic/images/ string value On the ironic-conductor node, directory where images are stored on disk. instance_master_path = /var/lib/ironic/master_images string value On the ironic-conductor node, directory where master instance images are stored on disk. Setting to the empty string disables image caching. ip_version = 4 string value The IP version that will be used for PXE booting. Defaults to 4. EXPERIMENTAL ipxe_boot_script = $pybasedir/drivers/modules/boot.ipxe string value On ironic-conductor node, the path to the main iPXE script file. ipxe_bootfile_name = undionly.kpxe string value Bootfile DHCP parameter. ipxe_bootfile_name_by_arch = {} dict value Bootfile DHCP parameter per node architecture. For example: aarch64:ipxe_aa64.efi ipxe_config_template = $pybasedir/drivers/modules/ipxe_config.template string value On ironic-conductor node, template file for iPXE operations. ipxe_timeout = 0 integer value Timeout value (in seconds) for downloading an image via iPXE. Defaults to 0 (no timeout) ipxe_use_swift = False boolean value Download deploy and rescue images directly from swift using temporary URLs. If set to false (default), images are downloaded to the ironic-conductor node and served over its local HTTP server. Applicable only when ipxe compatible boot interface is used. pxe_append_params = nofb nomodeset vga=normal string value Additional append parameters for baremetal PXE boot. pxe_bootfile_name = pxelinux.0 string value Bootfile DHCP parameter. pxe_bootfile_name_by_arch = {} dict value Bootfile DHCP parameter per node architecture. For example: aarch64:grubaa64.efi pxe_config_subdir = pxelinux.cfg string value Directory in which to create symbolic links which represent the MAC or IP address of the ports on a node and allow boot loaders to load the PXE file for the node. This directory name is relative to the PXE or iPXE folders. pxe_config_template = $pybasedir/drivers/modules/pxe_config.template string value On ironic-conductor node, template file for PXE loader configuration. pxe_config_template_by_arch = {} dict value On ironic-conductor node, template file for PXE configuration per node architecture. For example: aarch64:/opt/share/grubaa64_pxe_config.template tftp_master_path = /tftpboot/master_images string value On ironic-conductor node, directory where master TFTP images are stored on disk. Setting to the empty string disables image caching. tftp_root = /tftpboot string value ironic-conductor node's TFTP root path. The ironic-conductor must have read/write access to this path. tftp_server = $my_ip string value IP address of ironic-conductor node's TFTP server. uefi_ipxe_bootfile_name = ipxe.efi string value Bootfile DHCP parameter for UEFI boot mode. If you experience problems with booting using it, try snponly.efi. uefi_pxe_bootfile_name = bootx64.efi string value Bootfile DHCP parameter for UEFI boot mode. 
uefi_pxe_config_template = $pybasedir/drivers/modules/pxe_grub_config.template string value On ironic-conductor node, template file for PXE configuration for UEFI boot loader. Generally this is used for GRUB specific templates. 5.1.43. redfish The following table outlines the options available under the [redfish] group in the /etc/ironic/ironic.conf file. Table 5.42. redfish Configuration option = Default value Type Description auth_type = auto string value Redfish HTTP client authentication method. connection_attempts = 5 integer value Maximum number of attempts to try to connect to Redfish connection_cache_size = 1000 integer value Maximum Redfish client connection cache size. Redfish driver would strive to reuse authenticated BMC connections (obtained through Redfish Session Service). This option caps the maximum number of connections to maintain. The value of 0 disables client connection caching completely. connection_retry_interval = 4 integer value Number of seconds to wait between attempts to connect to Redfish file_permission = 420 integer value File permission for swift-less image hosting with the octal permission representation of file access permissions. This setting defaults to 644 , or as the octal number 0o644 in Python. This setting must be set to the octal number representation, meaning starting with 0o . firmware_update_fail_interval = 60 integer value Number of seconds to wait between checking for failed firmware update tasks firmware_update_status_interval = 60 integer value Number of seconds to wait between checking for completed firmware update tasks kernel_append_params = nofb nomodeset vga=normal string value Additional kernel parameters to pass down to the instance kernel. These parameters can be consumed by the kernel or by the applications by reading /proc/cmdline. Mind severe cmdline size limit! Can be overridden by instance_info/kernel_append_params property. raid_config_fail_interval = 60 integer value Number of seconds to wait between checking for failed raid config tasks raid_config_status_interval = 60 integer value Number of seconds to wait between checking for completed raid config tasks swift_container = ironic_redfish_container string value The Swift container to store Redfish driver data. Applies only when use_swift is enabled. swift_object_expiry_timeout = 900 integer value Amount of time in seconds for Swift objects to auto-expire. Applies only when use_swift is enabled. use_swift = True boolean value Upload generated ISO images for virtual media boot to Swift, then pass temporary URL to BMC for booting the node. If set to false, images are placed on the ironic-conductor node and served over its local HTTP server. 5.1.44. service_catalog The following table outlines the options available under the [service_catalog] group in the /etc/ironic/ironic.conf file. Table 5.43. service_catalog Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. 
If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.45. snmp The following table outlines the options available under the [snmp] group in the /etc/ironic/ironic.conf file. Table 5.44. snmp Configuration option = Default value Type Description power_timeout = 10 integer value Seconds to wait for power action to be completed reboot_delay = 0 integer value Time (in seconds) to sleep between when rebooting (powering off and on again) udp_transport_retries = 5 integer value Maximum number of UDP request retries, 0 means no retries. 
udp_transport_timeout = 1.0 floating point value Response timeout in seconds used for UDP transport. Timeout should be a multiple of 0.5 seconds and is applicable to each retry. 5.1.46. ssl The following table outlines the options available under the [ssl] group in the /etc/ironic/ironic.conf file. Table 5.45. ssl Configuration option = Default value Type Description ca_file = None string value CA certificate file to use to verify connecting clients. cert_file = None string value Certificate file to use when starting the server securely. ciphers = None string value Sets the list of available ciphers. value should be a string in the OpenSSL cipher list format. key_file = None string value Private key file to use when starting the server securely. version = None string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 5.1.47. swift The following table outlines the options available under the [swift] group in the /etc/ironic/ironic.conf file. Table 5.46. swift Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest". password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. 
service-name = None string value The default service_name for endpoint URL discovery. service-type = object-store string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. swift_max_retries = 2 integer value Maximum number of times to retry a Swift request, before failing. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 5.1.48. xclarity The following table outlines the options available under the [xclarity] group in the /etc/ironic/ironic.conf file. Table 5.47. xclarity Configuration option = Default value Type Description manager_ip = None string value IP address of the XClarity Controller. Configuration here is deprecated and will be removed in the Stein release. Please update the driver_info field to use "xclarity_manager_ip" instead password = None string value Password for XClarity Controller username. Configuration here is deprecated and will be removed in the Stein release. Please update the driver_info field to use "xclarity_password" instead port = 443 port value Port to be used for XClarity Controller connection. username = None string value Username for the XClarity Controller. Configuration here is deprecated and will be removed in the Stein release. Please update the driver_info field to use "xclarity_username" instead
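To make the group-by-group tables above more concrete, the following is a minimal, illustrative ironic.conf sketch that pulls a few of the documented groups together in one file. The network name, the TFTP server address, the extra console parameter, and the raised request_timeout are placeholder assumptions for a generic deployment rather than recommended values; the option names and the remaining values come from the tables in this reference, and the snponly.efi value mirrors the suggestion in the [pxe] table for firmware that has trouble with ipxe.efi.
# Illustrative /etc/ironic/ironic.conf fragment; all values below are examples only.
[neutron]
# Network used for cleaning and provisioning ramdisks (placeholder name).
cleaning_network = provisioning
provisioning_network = provisioning
# Raised from the default 45 only as an example of tuning for slow port actions.
request_timeout = 60
[pxe]
tftp_root = /tftpboot
# Placeholder conductor TFTP address; defaults to $my_ip.
tftp_server = 192.0.2.5
# Default append parameters plus an assumed serial console argument.
pxe_append_params = nofb nomodeset vga=normal console=ttyS0
# The [pxe] table suggests snponly.efi if ipxe.efi causes UEFI boot problems.
uefi_ipxe_bootfile_name = snponly.efi
[redfish]
# Serve virtual media ISO images from the conductor instead of Swift.
use_swift = False
connection_attempts = 5
[oslo_messaging_notifications]
# messagingv2 is one of the possible driver values listed above.
driver = messagingv2
topics = notifications
Because use_swift is disabled in this sketch, generated ISO images would be placed on the ironic-conductor node and served over its local HTTP server, as the [redfish] table above notes.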
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuration_reference/ironic
|
Extension APIs
|
Extension APIs OpenShift Container Platform 4.15 Reference guide for extension APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/extension_apis/index
|
Chapter 62. JmxTransTemplate schema reference
|
Chapter 62. JmxTransTemplate schema reference Used in: JmxTransSpec Property Property type Description deployment DeploymentTemplate Template for JmxTrans Deployment . pod PodTemplate Template for JmxTrans Pods . container ContainerTemplate Template for JmxTrans container. serviceAccount ResourceTemplate Template for the JmxTrans service account.
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-JmxTransTemplate-reference
|
Chapter 2. BMCEventSubscription [metal3.io/v1alpha1]
|
Chapter 2. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 2.1.1. .spec Description Type object Property Type Description context string Arbitrary user-provided context for the event destination string A webhook URL to send events to hostName string A reference to a BareMetalHost httpHeadersRef object A secret containing HTTP headers which should be passed along to the Destination when making a request 2.1.2. .spec.httpHeadersRef Description A secret containing HTTP headers which should be passed along to the Destination when making a request Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 2.1.3. .status Description Type object Property Type Description error string subscriptionID string 2.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/bmceventsubscriptions GET : list objects of kind BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions DELETE : delete collection of BMCEventSubscription GET : list objects of kind BMCEventSubscription POST : create a BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} DELETE : delete a BMCEventSubscription GET : read the specified BMCEventSubscription PATCH : partially update the specified BMCEventSubscription PUT : replace the specified BMCEventSubscription /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status GET : read status of the specified BMCEventSubscription PATCH : partially update status of the specified BMCEventSubscription PUT : replace status of the specified BMCEventSubscription 2.2.1. /apis/metal3.io/v1alpha1/bmceventsubscriptions Table 2.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind BMCEventSubscription Table 2.2. HTTP responses HTTP code Reponse body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty 2.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions Table 2.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 2.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BMCEventSubscription Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BMCEventSubscription Table 2.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests.
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - if resourceVersionMatch is set to any other value or unset, an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.8. HTTP responses HTTP code Response body 200 - OK BMCEventSubscriptionList schema 401 - Unauthorized Empty HTTP method POST Description create a BMCEventSubscription Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.10. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.11. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 202 - Accepted BMCEventSubscription schema 401 - Unauthorized Empty 2.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the BMCEventSubscription namespace string object name and auth scope, such as for teams and projects Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BMCEventSubscription Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. The value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BMCEventSubscription Table 2.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.
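As a hedged illustration of the create and delete parameters documented above, the following sketch validates a manifest with a server-side dry run before persisting it and then deletes it with the Foreground propagation policy; the object name, namespace, and spec fields (hostName, destination) are assumptions chosen for the example, not values defined by this reference.

cat << EOF > bmc-subscription.yaml
apiVersion: metal3.io/v1alpha1
kind: BMCEventSubscription
metadata:
  name: sub-example                # assumed example name
  namespace: openshift-machine-api # assumed example namespace
spec:
  hostName: worker-0               # BareMetalHost expected to emit events (assumed)
  destination: https://events.example.com/webhook
EOF
# Server-side dry run: the request is fully processed but nothing is persisted
oc create -f bmc-subscription.yaml --dry-run=server
# Create the object, then delete it with foreground cascading
oc create -f bmc-subscription.yaml
oc delete bmceventsubscription sub-example -n openshift-machine-api --cascade=foreground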
Defaults to unset Table 2.18. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BMCEventSubscription Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.20. Body parameters Parameter Type Description body Patch schema Table 2.21. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BMCEventSubscription Table 2.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.23. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.24. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty 2.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/bmceventsubscriptions/{name}/status Table 2.25. Global path parameters Parameter Type Description name string name of the BMCEventSubscription namespace string object name and auth scope, such as for teams and projects Table 2.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified BMCEventSubscription Table 2.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.28. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BMCEventSubscription Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.30.
Body parameters Parameter Type Description body Patch schema Table 2.31. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BMCEventSubscription Table 2.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.33. Body parameters Parameter Type Description body BMCEventSubscription schema Table 2.34. HTTP responses HTTP code Response body 200 - OK BMCEventSubscription schema 201 - Created BMCEventSubscription schema 401 - Unauthorized Empty
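To tie the PATCH method above to the command line, the sketch below applies a JSON merge patch to an existing subscription and then reads the object back, including its status; the object name, namespace, and label are illustrative assumptions.

# Merge-patch a label onto the subscription (name and namespace are assumed examples)
oc patch bmceventsubscription sub-example -n openshift-machine-api --type=merge -p '{"metadata":{"labels":{"environment":"test"}}}'
# Read the object back; the status stanza is served by the .../status subresource
oc get bmceventsubscription sub-example -n openshift-machine-api -o yaml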
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/bmceventsubscription-metal3-io-v1alpha1
|
Chapter 1. GPU device passthrough: Assigning a host GPU to a single virtual machine
|
Chapter 1. GPU device passthrough: Assigning a host GPU to a single virtual machine Red Hat Virtualization supports PCI VFIO, also called device passthrough, for some NVIDIA PCIe-based GPU devices as non-VGA graphics devices. You can attach one or more host GPUs to a single virtual machine by passing through the host GPU to the virtual machine, in addition to one of the standard emulated graphics interfaces. The virtual machine uses the emulated graphics device for pre-boot and installation, and the GPU takes control when its graphics drivers are loaded. For information on the exact number of host GPUs that you can pass through to a single virtual machine, see the NVIDIA website. To assign a GPU to a virtual machine, follow the steps in these procedures: Enable the I/O Memory Management Unit (IOMMU) on the host machine. Detach the GPU from the host. Attach the GPU to the guest. Install GPU drivers on the guest. Configure Xorg on the guest. These steps are detailed below. Prerequisites Your GPU device supports GPU passthrough mode. Your system is listed as a validated server hardware platform. Your host chipset supports Intel VT-d or AMD-Vi. For more information about supported hardware and software, see Validated Platforms in the NVIDIA GPU Software Release Notes . 1.1. Enabling host IOMMU support and blacklisting nouveau I/O Memory Management Unit (IOMMU) support on the host machine is necessary to use a GPU on a virtual machine. Procedure In the Administration Portal, click Compute → Hosts . Select a host and click Edit . The Edit Hosts pane appears. Click the Kernel tab. Check the Hostdev Passthrough & SR-IOV checkbox. This checkbox enables IOMMU support for a host with Intel VT-d or AMD-Vi by adding intel_iommu=on or amd_iommu=on to the kernel command line. Check the Blacklist Nouveau checkbox. Click OK . Select the host and click Management → Maintenance and OK . Click Installation → Reinstall . After the reinstallation is finished, reboot the host machine. When the host machine has rebooted, click Management → Activate . Note To enable IOMMU support using the command line, edit the grub.conf file on the host machine (./entries/rhvh-4.4.<machine id>.conf) to include the option intel_iommu=on . 1.2. Detaching the GPU from the host You cannot add the GPU to the virtual machine if the GPU is bound to the host kernel driver, so you must unbind the GPU device from the host before you can add it to the virtual machine. Host drivers often do not support dynamic unbinding of the GPU, so it is recommended to manually exclude the device from binding to the host drivers. Procedure On the host, identify the device slot name and IDs of the device by running the lspci command. In the following example, a graphics controller such as an NVIDIA Quadro or GRID card is used: The output shows that the NVIDIA GK104 device is installed. It has a graphics controller and an audio controller with the following properties: The device slot name of the graphics controller is 0000:03:00.0 , and the vendor-id:device-id for the graphics controller is 10de:11b4 . The device slot name of the audio controller is 0000:03:00.1 , and the vendor-id:device-id for the audio controller is 10de:0e0a . Prevent the host machine driver from using the GPU device. You can use a vendor-id:device-id with the pci-stub driver.
To do this, append the pci-stub.ids option, with the vendor-id:device-id as its value, to the GRUB_CMDLINE_LINUX environment variable located in the /etc/sysconfig/grub configuration file, for example: When adding additional vendor IDs and device IDs for pci-stub, separate them with a comma. Regenerate the boot loader configuration using grub2-mkconfig to include this option: Note When using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Reboot the host machine. Confirm that IOMMU is enabled, the host device is added to the list of pci-stub.ids, and Nouveau is blacklisted: 1 IOMMU is enabled 2 the host device is added to the list of pci-stub.ids 3 Nouveau is blacklisted 1.3. Attaching the GPU to a Virtual Machine After unbinding the GPU from the host kernel driver, you can add it to the virtual machine and enable the correct driver. Procedure Follow the steps in Adding a Host Device to a Virtual Machine in the Virtual Machine Management Guide . Run the virtual machine and log in to it. Install the NVIDIA GPU driver on the virtual machine. Verify that the correct kernel driver is in use for the GPU with the lspci -nnk command. For example: 1.4. Installing the GPU driver on the virtual machine Procedure Run the virtual machine and connect to it using the VNC or SPICE console. Download the driver to the virtual machine. For information on getting the driver, see the Drivers page on the NVIDIA website . Install the GPU driver. Important Linux only: When installing the driver on a Linux guest operating system, you are prompted to update xorg.conf. If you do not update xorg.conf during the installation, you need to update it manually. After the driver finishes installing, reboot the machine. For Windows virtual machines, fully power off the guest from the Administration portal or the VM portal, not from within the guest operating system. Important Windows only: Powering off the virtual machine from within the Windows guest operating system sometimes sends the virtual machine into hibernate mode, which does not completely clear the memory, possibly leading to subsequent problems. Using the Administration portal or the VM portal to power off the virtual machine forces it to fully clean the memory. Connect a monitor to the host GPU output interface and run the virtual machine. Set up NVIDIA vGPU guest software licensing for each vGPU and add the license credentials in the NVIDIA control panel. For more information, see How NVIDIA vGPU Software Licensing Is Enforced in the NVIDIA Virtual GPU Software Documentation . 1.5. Updating and Enabling xorg (Linux Virtual Machines) Before you can use the GPU on the virtual machine, you need to update and enable xorg on the virtual machine. The NVIDIA driver installation should do this automatically. Check if xorg is updated and enabled by viewing /etc/X11/xorg.conf : The first two lines indicate whether it was generated by NVIDIA. For example: Procedure On the virtual machine, generate the xorg.conf file using the following command: Copy the xorg.conf file to /etc/X11/xorg.conf using the following command: Reboot the virtual machine. Verify that xorg is updated and enabled by viewing /etc/X11/xorg.conf : Search for the Device section. You should see an entry similar to the following section: The GPU is now assigned to the virtual machine. 1.6. Removing a host GPU from a virtual machine For information on removing a host GPU from a virtual machine, see Removing Host Devices from a Virtual Machine in the Virtual Machine Management Guide .
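As a quick sanity check after the reboot, the following commands (a sketch that reuses the example vendor-id:device-id from above) confirm on the host that the pci-stub driver now owns the GPU and that IOMMU is active:

# The GPU should report pci-stub, not nouveau or nvidia, as the kernel driver in use
lspci -nnk -d 10de:11b4
# IOMMU groups are populated only when IOMMU is enabled; an empty directory indicates a problem
ls /sys/kernel/iommu_groups/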
|
[
"lspci -Dnn | grep -i NVIDIA 0000:03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [Quadro K4200] [10de:11b4] (rev a1) 0000:03:00.1 Audio device [0403]: NVIDIA Corporation GK104 HDMI Audio Controller [10de:0e0a] (rev a1)",
"GRUB_CMDLINE_LINUX=\"crashkernel=auto resume=/dev/mapper/vg0-lv_swap rd.lvm.lv=vg0/lv_root rd.lvm.lv=vg0/lv_swap rhgb quiet intel_iommu=on pci-stub.ids=10de:11b4,10de:0e0a\"",
"grub2-mkconfig -o /etc/grub2.cfg",
"cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-147.el8.x86_64 root=/dev/mapper/vg0-lv_root ro crashkernel=auto resume=/dev/mapper/vg0-lv_swap rd.lvm.lv=vg0/lv_root rd.lvm.lv=vg0/lv_swap rhgb quiet intel_iommu=on 1 pci-stub.ids=10de:11b4,10de:0e0a 2 rdblacklist=nouveau 3",
"lspci -nnk 00:07.0 VGA compatible controller [0300]: NVIDIA Corporation GK104GL [Quadro K4200] [10de:11b4] (rev a1) Subsystem: Hewlett-Packard Company Device [103c:1096] Kernel driver in use: nvidia Kernel modules: nouveau, nvidia_drm, nvidia",
"cat /etc/X11/xorg.conf",
"cat /etc/X11/xorg.conf nvidia-xconfig: X configuration file generated by nvidia-xconfig nvidia-xconfig: version 390.87 (buildmeister@swio-display-x64-rhel04-14) Tue Aug 21 17:33:38 PDT 2018",
"X -configure",
"cp /root/xorg.conf.new /etc/X11/xorg.conf",
"cat /etc/X11/xorg.conf",
"Section \"Device\" Identifier \"Device0\" Driver \"nvidia\" VendorName \"NVIDIA Corporation\" EndSection"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/setting_up_an_nvidia_gpu_for_a_virtual_machine_in_red_hat_virtualization/assembly_nvidia_gpu_passthrough
|
11.8. Controlling the Selection of Network Device Names
|
11.8. Controlling the Selection of Network Device Names Device naming can be controlled in the following manner: By identifying the network interface device Setting the MAC address in an ifcfg file using the HWADDR directive enables it to be identified by udev . The name will be taken from the string given by the DEVICE directive, which by convention is the same as the ifcfg suffix. For example, ifcfg-enp1s0 . By turning on or off biosdevname The name provided by biosdevname will be used (if biosdevname can determine one). By turning on or off the systemd-udev naming scheme The name provided by systemd-udev will be used (if systemd-udev can determine one).
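For example, a minimal ifcfg file that pins the name enp1s0 to a specific interface by its MAC address might look like the following sketch (the MAC address is a placeholder):

# /etc/sysconfig/network-scripts/ifcfg-enp1s0
DEVICE=enp1s0
HWADDR=52:54:00:12:34:56
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes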
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-controlling_the_selection_of_network_device_names
|
Chapter 6. Removing OSDs using the OpenShift Data Foundation CLI tool
|
Chapter 6. Removing OSDs using the OpenShift Data Foundation CLI tool 6.1. Removing object storage devices using the OpenShift Data Foundation CLI tool You can use the OpenShift Data Foundation command line interface (CLI) tool to automate the process of object storage device (OSD) removal. This helps to avoid possible data loss while removing OSDs. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Identify the OSD that needs to be removed. The OSD that needs removal is in a CrashLoopBackOff or Error state. Example output: Run the following command to remove OSD 0 : [Optional] If removal of the OSD affects placement group (PG) status, enter yes-force-destroy-osd . Verify that the last line of the command output contains cephosd: completed removal of OSD 0 . Verify that the corresponding deployment is removed:
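As an additional, optional verification (a sketch that assumes the rook-ceph toolbox pod is enabled in the openshift-storage namespace), you can confirm that the removed OSD no longer appears in the Ceph CRUSH tree:

# Locate the toolbox pod and query the OSD tree from inside it
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD ceph osd tree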
|
[
"oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide",
"rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none>",
"odf purge-osd 0",
"oc get deployment rook-ceph-osd-0"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/replacing_devices/removing_osds_using_the_openshift_data_foundation_cli_tool
|
Chapter 2. Release notes
|
Chapter 2. Release notes 2.1. Red Hat OpenShift support for Windows Containers release notes 2.1.1. Release notes for Red Hat Windows Machine Config Operator 10.17.0 This release of the WMCO provides bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.17.0 were released in 2.1.1.1. New features and improvements 2.1.1.2. Bug fixes 2.2. Release notes for past releases of the Windows Machine Config Operator The following release notes are for versions of the Windows Machine Config Operator (WMCO). For the current version, see Red Hat OpenShift support for Windows Containers release notes . 2.2.1. Release notes for Red Hat Windows Machine Config Operator 10.17.0 This release of the WMCO provides bug fixes for running Windows compute nodes in an OpenShift Container Platform cluster. The components of the WMCO 10.17.0 were released in 2.2.1.1. New features and improvements 2.2.1.2. Bug fixes 2.3. Windows Machine Config Operator prerequisites The following information details the supported platform versions, Windows Server versions, and networking configurations for the Windows Machine Config Operator (WMCO). See the vSphere documentation for any information that is relevant to only that platform. 2.3.1. WMCO supported installation method The WMCO fully supports installing Windows nodes into installer-provisioned infrastructure (IPI) clusters. This is the preferred OpenShift Container Platform installation method. For user-provisioned infrastructure (UPI) clusters, the WMCO supports installing Windows nodes only into a UPI cluster installed with the platform: none field set in the install-config.yaml file (bare-metal or provider-agnostic) and only for the BYOH (Bring Your Own Host) use case. UPI is not supported for any other platform. 2.3.2. WMCO 10.17.0 supported platforms and Windows Server versions The following table lists the Windows Server versions that are supported by WMCO 10.17.0, based on the applicable platform. Windows Server versions not listed are not supported and attempting to use them will cause errors. To prevent these errors, use only an appropriate version for your platform. Platform Supported Windows Server version Amazon Web Services (AWS) Windows Server 2022, OS Build 20348.681 or later [1] Windows Server 2019, version 1809 Microsoft Azure Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 VMware vSphere Windows Server 2022, OS Build 20348.681 or later Google Cloud Platform (GCP) Windows Server 2022, OS Build 20348.681 or later Nutanix Windows Server 2022, OS Build 20348.681 or later Bare metal or provider agnostic Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 For disconnected clusters, the Windows AMI must have the EC2LaunchV2 agent version 2.0.1643 or later installed. For more information, see the Install the latest version of EC2Launch v2 in the AWS documentation. 2.3.3. Supported networking Hybrid networking with OVN-Kubernetes is the only supported networking configuration. See the additional resources below for more information on this functionality. The following tables outline the type of networking configuration and Windows Server versions to use based on your platform. You must specify the network configuration when you install the cluster. Note The WMCO does not support OVN-Kubernetes without hybrid networking or OpenShift SDN. Dual NIC is not supported on WMCO-managed Windows instances. Table 2.1. 
Platform networking support Platform Supported networking Amazon Web Services (AWS) Hybrid networking with OVN-Kubernetes Microsoft Azure Hybrid networking with OVN-Kubernetes VMware vSphere Hybrid networking with OVN-Kubernetes with a custom VXLAN port Google Cloud Platform (GCP) Hybrid networking with OVN-Kubernetes Nutanix Hybrid networking with OVN-Kubernetes Bare metal or provider agnostic Hybrid networking with OVN-Kubernetes Table 2.2. Hybrid OVN-Kubernetes Windows Server support Hybrid networking with OVN-Kubernetes Supported Windows Server version Default VXLAN port Windows Server 2022, OS Build 20348.681 or later Windows Server 2019, version 1809 Custom VXLAN port Windows Server 2022, OS Build 20348.681 or later Additional resources Hybrid networking 2.4. Windows Machine Config Operator known limitations Note the following limitations when working with Windows nodes managed by the WMCO (Windows nodes): The following OpenShift Container Platform features are not supported on Windows nodes: Image builds OpenShift Pipelines OpenShift Service Mesh OpenShift monitoring of user-defined projects OpenShift Serverless Horizontal Pod Autoscaling Vertical Pod Autoscaling The following Red Hat features are not supported on Windows nodes: Red Hat Insights cost management Red Hat OpenShift Local Dual NIC is not supported on WMCO-managed Windows instances. Windows nodes do not support workloads created by using deployment configs. You can use a deployment or other method to deploy workloads. Red Hat OpenShift support for Windows Containers does not support adding Windows nodes to a cluster through a trunk port. The only supported networking configuration for adding Windows nodes is through an access port that carries traffic for the VLAN. Red Hat OpenShift support for Windows Containers does not support any Windows operating system language other than English (United States). Due to a limitation within the Windows operating system, clusterNetwork CIDR addresses of class E, such as 240.0.0.0 , are not compatible with Windows nodes. Kubernetes has identified the following node feature limitations : Huge pages are not supported for Windows containers. Privileged containers are not supported for Windows containers. Kubernetes has identified several API compatibility issues .
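To make the hybrid networking requirement concrete, the following is a minimal sketch of the cluster network configuration manifest that enables hybrid OVN-Kubernetes with a custom VXLAN port at installation time; the CIDR, host prefix, and port values are illustrative, and the custom port is typically only needed on vSphere:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898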
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/windows_container_support_for_openshift/release-notes
|
Migrating from version 3 to 4
|
Migrating from version 3 to 4 OpenShift Container Platform 4.17 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
|
[
"oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>",
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"oc run test --image registry.redhat.io/ubi9 --command sleep infinity",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"az login",
"AZURE_RESOURCE_GROUP=Velero_Backups",
"az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1",
"AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"",
"az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot",
"BLOB_CONTAINER=velero",
"az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID",
"AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv`",
"AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv --scopes /subscriptions/USDAZURE_SUBSCRIPTION_ID/resourceGroups/USDAZURE_RESOURCE_GROUP`",
"AZURE_CLIENT_ID=`az ad app credential list --id <your_app_id>`",
"cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')",
"podman login registry.redhat.io",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7:/operator.yml ./",
"oc replace --force -f operator.yml",
"oc scale -n openshift-migration --replicas=0 deployment/migration-operator",
"oc scale -n openshift-migration --replicas=1 deployment/migration-operator",
"oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"oc create -f controller.yml",
"oc sa get-token migration-controller -n openshift-migration",
"oc get pods -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"spec: indirectImageMigration: true indirectVolumeMigration: true",
"oc replace -f migplan.yaml -n openshift-migration",
"oc get migplan <migplan> -o yaml -n openshift-migration",
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i",
"oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'",
"oc create token migration-controller -n openshift-migration",
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ",
"oc create route passthrough --service=docker-registry --port=5000 -n default",
"oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry",
"az group list",
"{ \"id\": \"/subscriptions/...//resourceGroups/sample-rg-name\", \"location\": \"centralus\", \"name\": \"...\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": { \"kubernetes.io_cluster.sample-ld57c\": \"owned\", \"openshift_creationDate\": \"2019-10-25T23:28:57.988208+00:00\" }, \"type\": \"Microsoft.Resources/resourceGroups\" },",
"podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-controller-rhel8:v1.8):/crane ./",
"oc config view",
"crane tunnel-api [--namespace <namespace>] --destination-context <destination-cluster> --source-context <source-cluster>",
"crane tunnel-api --namespace my_tunnel --destination-context openshift-migration/c131-e-us-east-containers-cloud-ibm-com/admin --source-context default/192-168-122-171-nip-io:8443/admin",
"oc get po -n <namespace>",
"NAME READY STATUS RESTARTS AGE <pod_name> 2/2 Running 0 44s",
"oc logs -f -n <namespace> <pod_name> -c openvpn",
"oc get service -n <namespace>",
"oc sa get-token -n openshift-migration migration-controller",
"oc create route passthrough --service=docker-registry -n default",
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11",
"oc -n openshift-migration get pods | grep log",
"oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7",
"oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"oc get migmigration <migmigration> -o yaml",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>",
"Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>",
"time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93",
"oc get migmigration -n openshift-migration",
"NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s",
"oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration",
"name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>",
"apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0",
"apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>",
"podman pull <registry_url>:<port>/openshift/<image>",
"oc get bc --all-namespaces --template='range .items \"BuildConfig:\" .metadata.namespace/.metadata.name => \"\\t\"\"ImageStream(FROM):\" .spec.strategy.sourceStrategy.from.namespace/.spec.strategy.sourceStrategy.from.name \"\\t\"\"ImageStream(TO):\" .spec.output.to.namespace/.spec.output.to.name end'",
"podman tag <registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2",
"podman push <registry_url>:<port>/openshift/<image> 1",
"oc get imagestream -n openshift | grep <image>",
"NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 32 seconds ago",
"oc describe migmigration <pod> -n openshift-migration",
"Some or all transfer pods are not running for more than 10 mins on destination cluster",
"oc get namespace <namespace> -o yaml 1",
"oc edit namespace <namespace>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"",
"echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2",
"oc logs <Velero_Pod> -n openshift-migration",
"level=error msg=\"Error checking repository for stale locks\" error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1",
"spec: restic_timeout: 1h 1",
"status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2",
"oc describe <registry-example-migration-rvwcm> -n openshift-migration",
"status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration",
"oc describe <migration-example-rvwcm-98t49>",
"completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>",
"oc logs -f <restic-nr2v5>",
"backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration",
"spec: restic_supplemental_groups: <group_id> 1",
"spec: restic_supplemental_groups: - 5555 - 6666",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF",
"oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1",
"oc scale deployment <deployment> --replicas=<premigration_replicas>",
"apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"",
"oc get pod -n <namespace>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/migrating_from_version_3_to_4/index
|
Installing on Azure
|
Installing on Azure OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Azure Red Hat OpenShift Documentation Team
|
[
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id>",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2",
"export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2",
"export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3",
"export CLUSTER_SP_ID=\"<service_principal_id>\" 1",
"az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"",
"az feature show --namespace Microsoft.Compute --name EncryptionAtHost",
"az provider register -n Microsoft.Compute",
"az group create --name USDRESOURCEGROUP --location USDLOCATION",
"az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true",
"az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software",
"KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)",
"KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)",
"az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL",
"DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)",
"az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get",
"DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)",
"az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 413.92.2023101700 replicas: 3",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 compute: 3 - architecture: amd64 hyperthreading: Enabled 4 name: worker platform: {} replicas: 3 controlPlane: 5 architecture: amd64 hyperthreading: Enabled 6 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 7 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 8 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 9 cloudName: AzurePublicCloud 10 outboundType: Loadbalancer region: southindia 11 userTags: 12 createdBy: user environment: dev",
"oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'",
"[ [ { \"key\": \"createdBy\", \"value\": \"user\" }, { \"key\": \"environment\", \"value\": \"dev\" } ] ]",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 13 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 14 region: centralus 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"413.92.2023101700\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id>",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"413.92.2023101700\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"networkResourceGroupName: <vnet_resource_group> 1 virtualNetwork: <vnet> 2 controlPlaneSubnet: <control_plane_subnet> 3 computeSubnet: <compute_subnet> 4",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: centralus 14 resourceGroupName: existing_resource_group 15 networkResourceGroupName: vnet_resource_group 16 virtualNetwork: vnet 17 controlPlaneSubnet: control_plane_subnet 18 computeSubnet: compute_subnet 19 outboundType: UserDefinedRouting 20 cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 25 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev publish: Internal 26",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"az login",
"ccoctl azure create-all --name=<azure_infra_name> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --region=<azure_region> \\ 3 --subscription-id=<azure_subscription_id> \\ 4 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 5 --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \\ 6 --tenant-id=<azure_tenant_id> 7",
"ls <path_to_ccoctl_output_dir>/manifests",
"azure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"apiVersion: v1 baseDomain: example.com platform: azure: resourceGroupName: <azure_infra_name> 1",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: null name: cluster spec: mastersSchedulable: true policy: name: \"\" status: {}",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl azure delete --name=<name> \\ 1 --region=<azure_region> \\ 2 --subscription-id=<azure_subscription_id> \\ 3 --delete-oidc-resource-group",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: encryptionAtHost:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: ultraSSDCapability:",
"compute: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"compute: platform: azure: osDisk: diskEncryptionSet: name:",
"compute: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"compute: platform: azure: osImage: publisher:",
"compute: platform: azure: osImage: offer:",
"compute: platform: azure: osImage: sku:",
"compute: platform: azure: osImage: version:",
"compute: platform: azure: vmNetworkingType:",
"compute: platform: azure: type:",
"compute: platform: azure: zones:",
"compute: platform: azure: settings: securityType:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: settings: securityType:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: type:",
"controlPlane: platform: azure: zones:",
"platform: azure: defaultMachinePlatform: settings: securityType:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: osDisk: securityProfile: securityEncryptionType:",
"platform: azure: defaultMachinePlatform: encryptionAtHost:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: name:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: resourceGroup:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: subscriptionId:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: osImage: publisher:",
"platform: azure: defaultMachinePlatform: osImage: offer:",
"platform: azure: defaultMachinePlatform: osImage: sku:",
"platform: azure: defaultMachinePlatform: osImage: version:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: defaultMachinePlatform: zones:",
"controlPlane: platform: azure: encryptionAtHost:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: name:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: osImage: publisher:",
"controlPlane: platform: azure: osImage: offer:",
"controlPlane: platform: azure: osImage: sku:",
"controlPlane: platform: azure: osImage: version:",
"controlPlane: platform: azure: ultraSSDCapability:",
"controlPlane: platform: azure: vmNetworkingType:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: region:",
"platform: azure: zone:",
"platform: azure: defaultMachinePlatform: ultraSSDCapability:",
"platform: azure: networkResourceGroupName:",
"platform: azure: virtualNetwork:",
"platform: azure: controlPlaneSubnet:",
"platform: azure: computeSubnet:",
"platform: azure: cloudName:",
"platform: azure: defaultMachinePlatform: vmNetworkingType:"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/installing_on_azure/index
|
Chapter 4. Controlling pod placement onto nodes (scheduling)
|
Chapter 4. Controlling pod placement onto nodes (scheduling)

4.1. Controlling pod placement using the scheduler
Pod scheduling is an internal process that determines placement of new pods onto nodes within the cluster. The scheduler code has a clean separation that watches new pods as they get created and identifies the most suitable node to host them. It then creates bindings (pod to node bindings) for the pods using the master API.

Default pod scheduling: OpenShift Container Platform comes with a default scheduler that serves the needs of most users. The default scheduler uses both inherent and customization tools to determine the best fit for a pod.

Advanced pod scheduling: In situations where you might want more control over where new pods are placed, the OpenShift Container Platform advanced scheduling features allow you to configure a pod so that the pod is required or has a preference to run on a particular node or alongside a specific pod. You can control pod placement by using the following scheduling features: scheduler profiles, pod affinity and anti-affinity rules, node affinity, node selectors, taints and tolerations, and node overcommitment.

4.1.1. About the default scheduler
The default OpenShift Container Platform pod scheduler is responsible for determining the placement of new pods onto nodes within the cluster. It reads data from the pod and finds a node that is a good fit based on configured profiles. It is completely independent and exists as a standalone solution. It does not modify the pod; it creates a binding for the pod that ties the pod to the particular node.

4.1.1.1. Understanding default scheduling
The existing generic scheduler is the default platform-provided scheduler engine that selects a node to host the pod in a three-step operation:

Filters the nodes: The available nodes are filtered based on the constraints or requirements specified. This is done by running each node through the list of filter functions called predicates, or filters.

Prioritizes the filtered list of nodes: This is achieved by passing each node through a series of priority, or scoring, functions that assign it a score between 0 and 10, with 0 indicating a bad fit and 10 indicating a good fit to host the pod. The scheduler configuration can also take in a simple weight (positive numeric value) for each scoring function. The node score provided by each scoring function is multiplied by the weight (the default weight for most scores is 1) and then combined by adding the scores for each node provided by all the scores. This weight attribute can be used by administrators to give higher importance to some scores.

Selects the best fit node: The nodes are sorted based on their scores and the node with the highest score is selected to host the pod. If multiple nodes have the same high score, then one of them is selected at random.

4.1.2. Scheduler use cases
One of the important use cases for scheduling within OpenShift Container Platform is to support flexible affinity and anti-affinity policies.

4.1.2.1. Infrastructure topological levels
Administrators can define multiple topological levels for their infrastructure (nodes) by specifying labels on nodes. For example: region=r1, zone=z1, rack=s1. These label names have no particular meaning and administrators are free to name their infrastructure levels anything, such as city/building/room. Also, administrators can define any number of levels for their infrastructure topology, with three levels usually being adequate (such as regions → zones → racks). Administrators can specify affinity and anti-affinity rules at each of these levels in any combination.

4.1.2.2. Affinity
Administrators should be able to configure the scheduler to specify affinity at any topological level, or even at multiple levels. Affinity at a particular level indicates that all pods that belong to the same service are scheduled onto nodes that belong to the same level. This handles any latency requirements of applications by allowing administrators to ensure that peer pods do not end up being too geographically separated. If no node is available within the same affinity group to host the pod, then the pod is not scheduled. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules. These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods.

4.1.2.3. Anti-affinity
Administrators should be able to configure the scheduler to specify anti-affinity at any topological level, or even at multiple levels. Anti-affinity (or 'spread') at a particular level indicates that all pods that belong to the same service are spread across nodes that belong to that level. This ensures that the application is well spread for high availability purposes. The scheduler tries to balance the service pods across all applicable nodes as evenly as possible. If you need greater control over where the pods are scheduled, see Controlling pod placement on nodes using node affinity rules and Placing pods relative to other pods using affinity and anti-affinity rules. These advanced scheduling features allow administrators to specify which node a pod can be scheduled on and to force or reject scheduling relative to other pods.

4.2. Scheduling pods using a scheduler profile
You can configure OpenShift Container Platform to use a scheduling profile to schedule pods onto nodes within the cluster.

4.2.1. About scheduler profiles
You can specify a scheduler profile to control how pods are scheduled onto nodes. The following scheduler profiles are available:

LowNodeUtilization: This profile attempts to spread pods evenly across nodes to get low resource usage per node. This profile provides the default scheduler behavior.

HighNodeUtilization: This profile attempts to place as many pods as possible onto as few nodes as possible. This minimizes node count and has high resource usage per node.

NoScoring: This is a low-latency profile that strives for the quickest scheduling cycle by disabling all score plugins. This might sacrifice better scheduling decisions for faster ones.

4.2.2. Configuring a scheduler profile
You can configure the scheduler to use a scheduler profile.

Prerequisites: Access to the cluster as a user with the cluster-admin role.

Procedure: Edit the Scheduler object:

$ oc edit scheduler cluster

Specify the profile to use in the spec.profile field:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
#...
spec:
  mastersSchedulable: false
  profile: HighNodeUtilization 1
#...

1 Set to LowNodeUtilization, HighNodeUtilization, or NoScoring.

Save the file to apply the changes.

4.3. Placing pods relative to other pods using affinity and anti-affinity rules
Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node.
In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. 4.3.1. Understanding pod affinity Pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key/value labels on other pods. Pod affinity can tell the scheduler to locate a new pod on the same node as other pods if the label selector on the new pod matches the label on the current pod. Pod anti-affinity can prevent the scheduler from locating a new pod on the same node as pods with the same labels if the label selector on the new pod matches the label on the current pod. For example, using affinity rules, you could spread or pack pods within a service or relative to pods in other services. Anti-affinity rules allow you to prevent pods of a particular service from scheduling on the same nodes as pods of another service that are known to interfere with the performance of the pods of the first service. Or, you could spread the pods of a service across nodes, availability zones, or availability sets to reduce correlated failures. Note A label selector might match pods with multiple pod deployments. Use unique combinations of labels when configuring anti-affinity rules to avoid matching pods. There are two types of pod affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note Depending on your pod priority and preemption settings, the scheduler might not be able to find an appropriate node for a pod without violating affinity requirements. If so, a pod might not be scheduled. To prevent this situation, carefully configure pod affinity with equal-priority pods. You configure pod affinity/anti-affinity through the Pod spec files. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example shows a Pod spec configured for pod affinity and anti-affinity. In this example, the pod affinity rule indicates that the pod can schedule onto a node only if that node has at least one already-running pod with a label that has the key security and value S1 . The pod anti-affinity rule says that the pod prefers to not schedule onto a node if that node is already running a pod with label having key security and value S2 . Sample Pod config file with pod affinity apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: failure-domain.beta.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod 1 Stanza to configure pod affinity. 2 Defines a required rule. 3 5 The key and value (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . 
Sample Pod config file with pod anti-affinity apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . Note If labels on a node change at runtime such that the affinity rules on a pod are no longer met, the pod continues to run on the node. 4.3.2. Configuring a pod affinity rule The following steps demonstrate a simple two-pod configuration that creates a pod with a label and a pod that uses affinity to allow scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters to add the affinity: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1-east #... spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5 #... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 5 Specify a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.3. Configuring a pod anti-affinity rule The following steps demonstrate a simple two-pod configuration that creates a pod with a label and a pod that uses an anti-affinity preferred rule to attempt to prevent scheduling with that pod. Note You cannot add an affinity directly to a scheduled pod. Procedure Create a pod with a specific label in the pod spec: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod Create the pod. USD oc create -f <pod-spec>.yaml When creating other pods, configure the following parameters: Create a YAML file with the following content: apiVersion: v1 kind: Pod metadata: name: security-s2-east #...
spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6 #... 1 Adds a pod anti-affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter or the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 For a preferred rule, specifies a weight for the node, 1-100. The node with the highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to not be scheduled with the other pod, use the same key and values parameters as the label on the first pod. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. 6 Specifies a topologyKey , which is a prepopulated Kubernetes label that the system uses to denote such a topology domain. Create the pod. USD oc create -f <pod-spec>.yaml 4.3.4. Sample pod affinity and anti-affinity rules The following examples demonstrate pod affinity and pod anti-affinity. 4.3.4.1. Pod Affinity The following example demonstrates pod affinity for pods with matching labels and label selectors. The pod team4 has the label team:4 . apiVersion: v1 kind: Pod metadata: name: team4 labels: team: "4" #... spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #... The pod team4a has the label selector team:4 under podAffinity . apiVersion: v1 kind: Pod metadata: name: team4a #... spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - "4" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #... The team4a pod is scheduled on the same node as the team4 pod. 4.3.4.2. Pod Anti-affinity The following example demonstrates pod anti-affinity for pods with matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 #... spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #... The pod pod-s2 has the label selector security:s1 under podAntiAffinity . apiVersion: v1 kind: Pod metadata: name: pod-s2 #... spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod #... The pod pod-s2 cannot be scheduled on the same node as pod-s1 . 4.3.4.3. Pod Affinity with no Matching Labels The following example demonstrates pod affinity for pods without matching labels and label selectors. The pod pod-s1 has the label security:s1 . apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 #... spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #... The pod pod-s2 has the label selector security:s2 . apiVersion: v1 kind: Pod metadata: name: pod-s2 #... spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #... The pod pod-s2 is not scheduled unless there is a node with a pod that has the security:s2 label.
If there is no other pod with that label, the new pod remains in a pending state: Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none> 4.4. Controlling pod placement on nodes using node affinity rules Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. In OpenShift Container Platform node affinity is a set of rules used by the scheduler to determine where a pod can be placed. The rules are defined using custom labels on the nodes and label selectors specified in pods. 4.4.1. Understanding node affinity Node affinity allows a pod to specify an affinity towards a group of nodes it can be placed on. The node does not have control over the placement. For example, you could configure a pod to only run on a node with a specific CPU or in a specific availability zone. There are two types of node affinity rules: required and preferred . Required rules must be met before a pod can be scheduled on a node. Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Note If labels on a node change at runtime that results in an node affinity rule on a pod no longer being met, the pod continues to run on the node. You configure node affinity through the Pod spec file. You can specify a required rule, a preferred rule, or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule. The following example is a Pod spec with a rule that requires the pod be placed on a node with a label whose key is e2e-az-NorthSouth and whose value is either e2e-az-North or e2e-az-South : Example pod configuration file with a node affinity required rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #... 1 The stanza to configure node affinity. 2 Defines a required rule. 3 5 6 The key/value pair (label) that must be matched to apply the rule. 4 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . The following example is a node specification with a preferred rule that a node with a label whose key is e2e-az-EastWest and whose value is either e2e-az-East or e2e-az-West is preferred for the pod: Example pod configuration file with a node affinity preferred rule apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #... 1 The stanza to configure node affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with highest weight is preferred. 4 6 7 The key/value pair (label) that must be matched to apply the rule. 5 The operator represents the relationship between the label on the node and the set of values in the matchExpression parameters in the Pod spec. This value can be In , NotIn , Exists , or DoesNotExist , Lt , or Gt . 
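Because a pod can carry both rule types at once, the two examples above can be combined into a single spec. The following sketch, which reuses the label names from those examples, requires a node with the e2e-az-NorthSouth label and, among the nodes that qualify, prefers one with the e2e-az-EastWest label:

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity-combined
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: e2e-az-NorthSouth
            operator: In
            values:
            - e2e-az-North
            - e2e-az-South
      preferredDuringSchedulingIgnoredDuringExecution: # soft preference
      - weight: 1
        preference:
          matchExpressions:
          - key: e2e-az-EastWest
            operator: In
            values:
            - e2e-az-East
            - e2e-az-West
  containers:
  - name: with-node-affinity
    image: docker.io/ocpqe/hello-pod
#...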
There is no explicit node anti-affinity concept, but using the NotIn or DoesNotExist operator replicates that behavior. Note If you are using node affinity and node selectors in the same pod configuration, note the following: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. 4.4.2. Configuring a required node affinity rule Required rules must be met before a pod can be scheduled on a node. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler is required to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az1 Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #... Create a pod with a specific label in the pod spec: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. Example output apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #... 1 Adds a pod affinity. 2 Configures the requiredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 4 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod: USD oc create -f <file-name>.yaml 4.4.3. Configuring a preferred node affinity rule Preferred rules specify that, if the rule is met, the scheduler tries to enforce the rules, but does not guarantee enforcement. Procedure The following steps demonstrate a simple configuration that creates a node and a pod that the scheduler tries to place on the node. Add a label to a node using the oc label node command: USD oc label node node1 e2e-az-name=e2e-az3 Create a pod with a specific label: Create a YAML file with the following content: Note You cannot add an affinity directly to a scheduled pod. apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #... 1 Adds a pod affinity. 2 Configures the preferredDuringSchedulingIgnoredDuringExecution parameter. 3 Specifies a weight for the node, as a number 1-100. The node with highest weight is preferred. 4 Specifies the key and values that must be met. If you want the new pod to be scheduled on the node you edited, use the same key and values parameters as the label in the node. 5 Specifies an operator . The operator can be In , NotIn , Exists , or DoesNotExist . For example, use the operator In to require the label to be in the node. Create the pod. USD oc create -f <file-name>.yaml 4.4.4. 
Sample node affinity rules The following examples demonstrate node affinity. 4.4.4.1. Node affinity with matching labels The following example demonstrates node affinity for a node and pod with matching labels: The Node1 node has the label zone:us : USD oc label node node1 zone=us Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod can be scheduled on Node1: USD oc get pod -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1 4.4.4.2. Node affinity with no matching labels The following example demonstrates node affinity for a node and pod without matching labels: The Node1 node has the label zone:emea : USD oc label node node1 zone=emea Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #... The pod-s1 pod has the zone and us key/value pair under a required node affinity rule: USD cat pod-s1.yaml Example output apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: "zone" operator: In values: - us #... The pod-s1 pod cannot be scheduled on Node1: USD oc describe pod pod-s1 Example output ... Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1). 4.4.5. Additional resources Understanding how to update labels on nodes 4.5. Placing pods onto overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. Overcommitment might be desirable in development environments where a trade-off of guaranteed performance for capacity is acceptable. Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. 4.5.1. Understanding overcommitment Requests and limits enable administrators to allow and manage the overcommitment of resources on a node. The scheduler uses requests for scheduling your container and providing a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node. OpenShift Container Platform administrators can control the level of overcommit and manage container density on nodes by configuring masters to override the ratio between request and limit set on developer containers. In conjunction with a per-project LimitRange object specifying limits and defaults, this adjusts the container limit and request to achieve the desired level of overcommit.
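For reference, a minimal sketch of such a per-project LimitRange object; the namespace name and the default and defaultRequest values are placeholders, not recommended settings:

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
  namespace: <project_name>
spec:
  limits:
  - type: Container
    default:        # limit applied to containers that do not set one
      cpu: 500m
      memory: 512Mi
    defaultRequest: # request applied to containers that do not set one
      cpu: 100m
      memory: 256Mi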
Note These overrides have no effect if no limits have been set on containers. Create a LimitRange object with default limits, per individual project, or in the project template, to ensure that the overrides apply. After these overrides, the container limits and requests must still be validated by any LimitRange object in the project. It is possible, for example, for developers to specify a limit close to the minimum limit, and have the request then be overridden below the minimum limit, causing the pod to be forbidden. This unfortunate user experience should be addressed with future work, but for now, configure this capability and LimitRange objects with caution. 4.5.2. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 4.6. Controlling pod placement using node taints Taints and tolerations allow a node to control which pods should (or should not) be scheduled on it. 4.6.1. Understanding taints and tolerations A taint allows a node to refuse to schedule a pod unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 4.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain.
PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 4.6.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that tolerates the taint and that specifies the tolerationSeconds parameter is not evicted until that time period expires. Example Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Here, if this pod is running and a matching taint is added to the node, the pod stays bound to the node for 3,600 seconds and is then evicted. If the taint is removed before that time, the pod is not evicted. 4.6.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod.
OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries to not schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" #... In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 4.6.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. The new BestEffort pods do not get scheduled onto the affected node. 4.6.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . 
When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #... 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding pod eviction. Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 4.6.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and values parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - operator: "Exists" #... 4.6.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. 
Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 4.6.2.1. Adding taints and tolerations using a machine set You can add taints to nodes using a machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. 
Scale down the machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 4.6.2.2. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 4.6.2.3. Creating a project with a node selector and toleration You can create a project that uses a node selector and toleration, which are set as annotations, to control the placement of pods onto specific nodes. Any subsequent resources created in the project are then scheduled on nodes that have a taint matching the toleration. Prerequisites A label for node selection has been added to one or more nodes by using a machine set or editing the node directly. A taint has been added to one or more nodes by using a machine set or editing the node directly. Procedure Create a Project resource definition, specifying a node selector and toleration in the metadata.annotations section: Example project.yaml file kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{"operator": "Exists", "effect": "NoSchedule", "key": "<key_name>"} 3 ] 1 The project name. 2 The default node selector label. 3 The toleration parameters, as described in the Taint and toleration components table. This example uses the NoSchedule effect, which allows existing pods on the node to remain, and the Exists operator, which does not take a value. Use the oc apply command to create the project: USD oc apply -f project.yaml Any subsequent resources created in the <project_name> namespace should now be scheduled on the specified nodes. Additional resources Adding taints and tolerations manually to nodes or with machine sets Creating project-wide node selectors Pod placement of Operator workloads 4.6.2.4. 
Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my_node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 4.6.3. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 4.7. Placing pods on specific nodes using node selectors A node selector specifies a map of key/value pairs that are defined using custom labels on nodes and selectors specified in pods. For the pod to be eligible to run on a node, the pod must have the same key/value node selector as the label on the node. 4.7.1. About node selectors You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You can use a node selector to place specific pods on specific nodes, cluster-wide node selectors to place new pods on specific nodes anywhere in the cluster, and project node selectors to place new pods in a project on specific nodes. For example, as a cluster administrator, you can create an infrastructure where application developers can deploy pods only onto the nodes closest to their geographical location by including a node selector in every pod they create. In this example, the cluster consists of five data centers spread across two regions. In the U.S., label the nodes as us-east , us-central , or us-west . In the Asia-Pacific region (APAC), label the nodes as apac-east or apac-west . The developers can add a node selector to the pods they create to ensure the pods get scheduled on those nodes. A pod is not scheduled if the Pod object contains a node selector, but no node has a matching label. 
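As a sketch of the data center scenario above, a pod pinned to the APAC-east nodes might look like the following. The label key region is an assumption here, because the scenario names only the label values, and the pod name is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: apac-east-app # illustrative name
spec:
  nodeSelector:
    region: apac-east # assumed label key; must match a label on at least one node
  containers:
  - name: hello-pod
    image: docker.io/ocpqe/hello-pod
#...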
Important If you are using node selectors and node affinity in the same pod configuration, the following rules control pod placement onto nodes: If you configure both nodeSelector and nodeAffinity , both conditions must be satisfied for the pod to be scheduled onto a candidate node. If you specify multiple nodeSelectorTerms associated with nodeAffinity types, then the pod can be scheduled onto a node if one of the nodeSelectorTerms is satisfied. If you specify multiple matchExpressions associated with nodeSelectorTerms , then the pod can be scheduled onto a node only if all matchExpressions are satisfied. Node selectors on specific pods and nodes You can control which node a specific pod is scheduled on by using node selectors and labels. To use node selectors and labels, first label the node to avoid pods being descheduled, then add the node selector to the pod. Note You cannot add a node selector directly to an existing scheduled pod. You must label the object that controls the pod, such as deployment config. For example, the following Node object has the region: east label: Sample Node object with a label kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node #... 1 Labels to match the pod node selector. A pod has the type: user-node,region: east node selector: Sample Pod object with node selectors apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: 1 region: east type: user-node #... 1 Node selectors to match the node label. The node must have a label for each node selector. When you create the pod using the example pod spec, it can be scheduled on the example node. Default cluster-wide node selectors With default cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. For example, the following Scheduler object has the default cluster-wide region=east and type=user-node node selectors: Example Scheduler Operator Custom Resource apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster #... spec: defaultNodeSelector: type=user-node,region=east #... A node in that cluster has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: s1 #... spec: nodeSelector: region: east #... 
When you create the pod using the example pod spec in the example cluster, the pod is created with the cluster-wide node selector and is scheduled on the labeled node: Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> Note If the project where you create the pod has a project node selector, that selector takes preference over a cluster-wide node selector. Your pod is not created or scheduled if the pod does not have the project node selector. Project node selectors With project node selectors, when you create a pod in this project, OpenShift Container Platform adds the node selectors to the pod and schedules the pods on a node with matching labels. If there is a cluster-wide default node selector, a project node selector takes preference. For example, the following project has the region=east node selector: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: "region=east" #... The following node has the type=user-node,region=east labels: Example Node object apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 #... labels: region: east type: user-node #... When you create the pod using the example pod spec in this example project, the pod is created with the project node selectors and is scheduled on the labeled node: Example Pod object apiVersion: v1 kind: Pod metadata: namespace: east-region #... spec: nodeSelector: region: east type: user-node #... Example pod list with the pod on the labeled node NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none> A pod in the project is not created or scheduled if the pod contains different node selectors. For example, if you deploy the following pod into the example project, it is not be created: Example Pod object with an invalid node selector apiVersion: v1 kind: Pod metadata: name: west-region #... spec: nodeSelector: region: west #... 4.7.2. Using node selectors to control pod placement You can use node selectors on pods and labels on nodes to control where the pod is scheduled. With node selectors, OpenShift Container Platform schedules the pods on nodes that contain matching labels. You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. To add node selectors to an existing pod, add a node selector to the controlling object for that pod, such as a ReplicaSet object, DaemonSet object, StatefulSet object, Deployment object, or DeploymentConfig object. Any existing pods under that controlling object are recreated on a node with a matching label. If you are creating a new pod, you can add the node selector directly to the pod spec. If the pod does not have a controlling object, you must delete the pod, edit the pod spec, and recreate the pod. Note You cannot add a node selector directly to an existing scheduled pod. Prerequisites To add a node selector to existing pods, determine the controlling object for that pod. 
For example, the router-default-66d5cf9464-m2g75 pod is controlled by the router-default-66d5cf9464 replica set: USD oc describe pod router-default-66d5cf9464-7pwkc Example output kind: Pod apiVersion: v1 metadata: #... Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress # ... Controlled By: ReplicaSet/router-default-66d5cf9464 # ... The web console lists the controlling object under ownerReferences in the pod YAML: apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc # ... ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true # ... Procedure Add labels to a node by using a machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" #... Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet # ... spec: # ... template: metadata: # ... spec: metadata: labels: region: east type: user-node # ... Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: "user-node" region: "east" #... Verify that the labels are added to the node: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.24.0 Add the matching node selector to a pod: To add a node selector to existing and future pods, add a node selector to the controlling object for the pods: Example ReplicaSet object with labels kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 # ... spec: # ... template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 #... 1 Add the node selector. To add a node selector to a specific, new pod, add the selector to the Pod object directly: Example Pod object with a node selector apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 #... spec: nodeSelector: region: east type: user-node #... Note You cannot add a node selector directly to an existing scheduled pod. 4.7.3. 
Creating default cluster-wide node selectors You can use default cluster-wide node selectors on pods together with labels on nodes to constrain all pods created in a cluster to specific nodes. With cluster-wide node selectors, when you create a pod in that cluster, OpenShift Container Platform adds the default node selectors to the pod and schedules the pod on nodes with matching labels. You configure cluster-wide node selectors by editing the Scheduler Operator custom resource (CR). You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. Note You can add additional key/value pairs to a pod. But you cannot add a different value for a default key. Procedure To add a default cluster-wide node selector: Edit the Scheduler Operator CR to add the default cluster-wide node selectors: USD oc edit scheduler cluster Example Scheduler Operator CR with a node selector apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false 1 Add a node selector with the appropriate <key>:<value> pairs. After making this change, wait for the pods in the openshift-kube-apiserver project to redeploy. This can take several minutes. The default cluster-wide node selector does not take effect until the pods redeploy. Add labels to a node by using a machine set or editing the node directly: Use a machine set to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api 1 1 Add a <key>/<value> pair for each label. For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api Example MachineSet object apiVersion: machine.openshift.io/v1beta1 kind: MachineSet ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node ... 
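If you prefer a non-interactive check, you can also read the labels back from the machine set. This is a minimal sketch; the machine set name is a placeholder and the jsonpath expression assumes the labels were added under the template, as shown in the preceding example:
oc get machineset <machineset_name> -n openshift-machine-api -o jsonpath='{.spec.template.spec.metadata.labels}'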
Redeploy the nodes associated with that machine set by scaling down to 0 and scaling up the nodes: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.24.0 Add labels directly to a node: Edit the Node object for the node: USD oc label nodes <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the node using the oc get command: USD oc get nodes -l <key>=<value>,<key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.24.0 4.7.4. Creating project-wide node selectors You can use node selectors in a project together with labels on nodes to constrain all pods created in that project to the labeled nodes. When you create a pod in this project, OpenShift Container Platform adds the node selectors to the pods in the project and schedules the pods on a node with matching labels in the project. If there is a cluster-wide default node selector, a project node selector takes preference. You add node selectors to a project by editing the Namespace object to add the openshift.io/node-selector parameter. You add labels to a node, a machine set, or a machine config. Adding the label to the machine set ensures that if the node or machine goes down, new nodes have the label. Labels added to a node or machine config do not persist if the node or machine goes down. A pod is not scheduled if the Pod object contains a node selector, but no project has a matching node selector. When you create a pod from that spec, you receive an error similar to the following message: Example error message Error from server (Forbidden): error when creating "pod.yaml": pods "pod-4" is forbidden: pod node label selector conflicts with its project node label selector Note You can add additional key/value pairs to a pod. But you cannot add a different value for a project key. Procedure To add a default project node selector: Create a namespace or edit an existing namespace to add the openshift.io/node-selector parameter: USD oc edit namespace <name> Example output apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "type=user-node,region=east" 1 openshift.io/description: "" openshift.io/display-name: "" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: "2021-05-10T12:35:04Z" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: "145537" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes 1 Add the openshift.io/node-selector with the appropriate <key>:<value> pairs. 
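If you prefer not to open an editor, you can set the same annotation non-interactively. This is a minimal sketch; the namespace name and the selector value are illustrative:
oc annotate namespace <name> openshift.io/node-selector="type=user-node,region=east" --overwrite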
Add labels to a node by using a machine set or editing the node directly: Use a MachineSet object to add labels to nodes managed by the machine set when a node is created: Run the following command to add labels to a MachineSet object: USD oc patch MachineSet <name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"<key>"="<value>","<key>"="<value>"}}]' -n openshift-machine-api For example: USD oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels", "value":{"type":"user-node","region":"east"}}]' -n openshift-machine-api Tip You can alternatively apply the following YAML to add labels to a machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: "east" type: "user-node" Verify that the labels are added to the MachineSet object by using the oc edit command: For example: USD oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: ... spec: ... template: metadata: ... spec: metadata: labels: region: east type: user-node Redeploy the nodes associated with that machine set: For example: USD oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api USD oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api When the nodes are ready and available, verify that the label is added to the nodes by using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.24.0 Add labels directly to a node: Edit the Node object to add labels: USD oc label <resource> <name> <key>=<value> For example, to label a node: USD oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east Tip You can alternatively apply the following YAML to add labels to a node: kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: "user-node" region: "east" Verify that the labels are added to the Node object using the oc get command: USD oc get nodes -l <key>=<value> For example: USD oc get nodes -l type=user-node,region=east Example output NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.24.0 Additional resources Creating a project with a node selector and toleration 4.8. Controlling pod placement by using pod topology spread constraints You can use pod topology spread constraints to control the placement of your pods across nodes, zones, regions, or other user-defined topology domains. 4.8.1. About pod topology spread constraints By using a pod topology spread constraint , you provide fine-grained control over the distribution of pods across failure domains to help achieve high availability and more efficient resource utilization. OpenShift Container Platform administrators can label nodes to provide topology information, such as regions, zones, nodes, or other user-defined domains. After these labels are set on nodes, users can then define pod topology spread constraints to control the placement of pods across these topology domains. You specify which pods to group together, which topology domains they are spread among, and the acceptable skew. 
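For example, with maxSkew: 1 and two zones, if zone A already runs three matching pods and zone B runs one, a new matching pod can only be placed in zone B: scheduling it in zone A would raise the difference between the zones to three, while scheduling it in zone B keeps the difference within the allowed skew of one.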
Only pods within the same namespace are matched and grouped together when spreading due to a constraint. 4.8.2. Configuring pod topology spread constraints The following steps demonstrate how to configure pod topology spread constraints to distribute pods that match the specified labels based on their zone. You can specify multiple pod topology spread constraints, but you must ensure that they do not conflict with each other. All pod topology spread constraints must be satisfied for a pod to be placed. Prerequisites A cluster administrator has added the required labels to nodes. Procedure Create a Pod spec and specify a pod topology spread constraint: Example pod-spec.yaml file apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod 1 The maximum difference in number of pods between any two topology domains. The default is 1 , and you cannot specify a value of 0 . 2 The key of a node label. Nodes with this key and identical value are considered to be in the same topology. 3 How to handle a pod if it does not satisfy the spread constraint. The default is DoNotSchedule , which tells the scheduler not to schedule the pod. Set to ScheduleAnyway to still schedule the pod, but the scheduler prioritizes honoring the skew to not make the cluster more imbalanced. 4 Pods that match this label selector are counted and recognized as a group when spreading to satisfy the constraint. Be sure to specify a label selector, otherwise no pods can be matched. 5 Be sure that this Pod spec also sets its labels to match this label selector if you want it to be counted properly in the future. Create the pod: USD oc create -f pod-spec.yaml 4.8.3. Example pod topology spread constraints The following examples demonstrate pod topology spread constraint configurations. 4.8.3.1. Single pod topology spread constraint example This example Pod spec defines one pod topology spread constraint. It matches on pods labeled region: us-east , distributes among zones, specifies a skew of 1 , and does not schedule the pod if it does not meet these requirements. kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod 4.8.3.2. Multiple pod topology spread constraints example This example Pod spec defines two pod topology spread constraints. Both match on pods labeled region: us-east , specify a skew of 1 , and do not schedule the pod if it does not meet these requirements. The first constraint distributes pods based on a user-defined label node , and the second constraint distributes pods based on a user-defined label rack . Both constraints must be met for the pod to be scheduled. kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: "docker.io/ocpqe/hello-pod" name: hello-pod 4.8.4. Additional resources Understanding how to update labels on nodes 4.9. 
Evicting pods using the descheduler While the scheduler is used to determine the most suitable node to host a new pod, the descheduler can be used to evict a running pod so that the pod can be rescheduled onto a more suitable node. 4.9.1. About the descheduler You can use the descheduler to evict pods based on specific strategies so that the pods can be rescheduled onto more appropriate nodes. You can benefit from descheduling running pods in situations such as the following: Nodes are underutilized or overutilized. Pod and node affinity requirements, such as taints or labels, have changed and the original scheduling decisions are no longer appropriate for certain nodes. Node failure requires pods to be moved. New nodes are added to clusters. Pods have been restarted too many times. Important The descheduler does not schedule replacement of evicted pods. The scheduler automatically performs this task for the evicted pods. When the descheduler decides to evict pods from a node, it employs the following general mechanism: Pods in the openshift-* and kube-system namespaces are never evicted. Critical pods with priorityClassName set to system-cluster-critical or system-node-critical are never evicted. Static, mirrored, or stand-alone pods that are not part of a replication controller, replica set, deployment, or job are never evicted because these pods will not be recreated. Pods associated with daemon sets are never evicted. Pods with local storage are never evicted. Best effort pods are evicted before burstable and guaranteed pods. All types of pods with the descheduler.alpha.kubernetes.io/evict annotation are eligible for eviction. This annotation is used to override checks that prevent eviction, and the user can select which pod is evicted. Users should know how and if the pod will be recreated. Pods subject to a pod disruption budget (PDB) are not evicted if descheduling would violate the PDB. Pods are evicted by using the eviction subresource, which honors PDBs. 4.9.2. Descheduler profiles The following descheduler profiles are available: AffinityAndTaints This profile evicts pods that violate inter-pod anti-affinity, node affinity, and node taints. It enables the following strategies: RemovePodsViolatingInterPodAntiAffinity : removes pods that are violating inter-pod anti-affinity. RemovePodsViolatingNodeAffinity : removes pods that are violating node affinity. RemovePodsViolatingNodeTaints : removes pods that are violating NoSchedule taints on nodes. Pods with a node affinity type of requiredDuringSchedulingIgnoredDuringExecution are removed. TopologyAndDuplicates This profile evicts pods in an effort to evenly spread similar pods, or pods of the same topology domain, among nodes. It enables the following strategies: RemovePodsViolatingTopologySpreadConstraint : finds unbalanced topology domains and tries to evict pods from larger ones when DoNotSchedule constraints are violated. RemoveDuplicates : ensures that there is only one pod associated with a replica set, replication controller, deployment, or job running on the same node. If there are more, those duplicate pods are evicted for better pod distribution in a cluster. LifecycleAndUtilization This profile evicts long-running pods and balances resource usage between nodes. It enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times. Pods whose sum of restarts over all containers (including Init Containers) is more than 100 are removed.
LowNodeUtilization : finds nodes that are underutilized and evicts pods, if possible, from overutilized nodes in the hope that recreation of evicted pods will be scheduled on these underutilized nodes. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). PodLifeTime : evicts pods that are too old. By default, pods that are older than 24 hours are removed. You can customize the pod lifetime value. SoftTopologyAndDuplicates This profile is the same as TopologyAndDuplicates , except that pods with soft topology constraints, such as whenUnsatisfiable: ScheduleAnyway , are also considered for eviction. Note Do not enable both SoftTopologyAndDuplicates and TopologyAndDuplicates . Enabling both results in a conflict. EvictPodsWithLocalStorage This profile allows pods with local storage to be eligible for eviction. EvictPodsWithPVC This profile allows pods with persistent volume claims to be eligible for eviction. 4.9.3. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites Cluster administrator privileges. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section to select one or more profiles to enable. The AffinityAndTaints profile is enabled by default. Click Add Profile to select additional profiles. Note Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . Enabling both results in a conflict. Optional: Expand the Profile Customizations section to set optional configurations for the descheduler. Set a custom pod lifetime value for the LifecycleAndUtilization profile. 
Use the podLifetime field to set a numerical value and a valid unit ( s , m , or h ). The default pod lifetime is 24 hours ( 24h ). Set a custom priority threshold to consider pods for eviction only if their priority is lower than a specified priority level. Use the thresholdPriority field to set a numerical priority threshold or use the thresholdPriorityClassName field to specify a certain priority class name. Note Do not specify both thresholdPriority and thresholdPriorityClassName for the descheduler. Set specific namespaces to exclude from or include in descheduler operations. Expand the namespaces field and add namespaces to the excluded or included list. You can set either a list of namespaces to exclude or a list of namespaces to include, but not both. Note that protected namespaces ( openshift-* , kube-system , hypershift ) are excluded by default. Important The LowNodeUtilization strategy does not support namespace exclusion. If the LifecycleAndUtilization profile is set, which enables the LowNodeUtilization strategy, then no namespaces are excluded, even the protected namespaces. To avoid evictions from the protected namespaces while the LowNodeUtilization strategy is enabled, set the priority class name to system-cluster-critical or system-node-critical . Experimental: Set thresholds for underutilization and overutilization for the LowNodeUtilization strategy. Use the devLowNodeUtilizationThresholds field to set one of the following values: Low : 10% underutilized and 30% overutilized Medium : 20% underutilized and 50% overutilized (Default) High : 40% underutilized and 70% overutilized Note This setting is experimental and should not be used in a production environment. Optional: Use the Descheduling Interval Seconds field to change the number of seconds between descheduler runs. The default is 3600 seconds. Click Create . You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). If you did not adjust the profiles when creating the descheduler instance from the web console, the AffinityAndTaints profile is enabled by default. 4.9.4. Configuring descheduler profiles You can configure which profiles the descheduler uses to evict pods. Prerequisites Cluster administrator privileges Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Specify one or more profiles in the spec.profiles section. apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC 1 Optional: By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . 2 Optional: Set a list of user-created namespaces to include or exclude from descheduler operations. Use excluded to set a list of namespaces to exclude or use included to set a list of namespaces to include. Note that protected namespaces ( openshift-* , kube-system , hypershift ) are excluded by default. Important The LowNodeUtilization strategy does not support namespace exclusion.
If the LifecycleAndUtilization profile is set, which enables the LowNodeUtilization strategy, then no namespaces are excluded, even the protected namespaces. To avoid evictions from the protected namespaces while the LowNodeUtilization strategy is enabled, set the priority class name to system-cluster-critical or system-node-critical . 3 Optional: Enable a custom pod lifetime value for the LifecycleAndUtilization profile. Valid units are s , m , or h . The default pod lifetime is 24 hours. 4 Optional: Specify a priority threshold to consider pods for eviction only if their priority is lower than the specified level. Use the thresholdPriority field to set a numerical priority threshold (for example, 10000 ) or use the thresholdPriorityClassName field to specify a certain priority class name (for example, my-priority-class-name ). If you specify a priority class name, it must already exist or the descheduler will throw an error. Do not set both thresholdPriority and thresholdPriorityClassName . 5 Add one or more profiles to enable. Available profiles: AffinityAndTaints , TopologyAndDuplicates , LifecycleAndUtilization , SoftTopologyAndDuplicates , EvictPodsWithLocalStorage , and EvictPodsWithPVC . 6 Do not enable both TopologyAndDuplicates and SoftTopologyAndDuplicates . Enabling both results in a conflict. You can enable multiple profiles; the order that the profiles are specified in is not important. Save the file to apply the changes. 4.9.5. Configuring the descheduler interval You can configure the amount of time between descheduler runs. The default is 3600 seconds (one hour). Prerequisites Cluster administrator privileges Procedure Edit the KubeDescheduler object: USD oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator Update the deschedulingIntervalSeconds field to the desired value: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1 ... 1 Set the number of seconds between descheduler runs. A value of 0 in this field runs the descheduler once and exits. Save the file to apply the changes. 4.9.6. Uninstalling the descheduler You can remove the descheduler from your cluster by removing the descheduler instance and uninstalling the Kube Descheduler Operator. This procedure also cleans up the KubeDescheduler CRD and openshift-kube-descheduler-operator namespace. Prerequisites Cluster administrator privileges. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Delete the descheduler instance. From the Operators Installed Operators page, click Kube Descheduler Operator . Select the Kube Descheduler tab. Click the Options menu to the cluster entry and select Delete KubeDescheduler . In the confirmation dialog, click Delete . Uninstall the Kube Descheduler Operator. Navigate to Operators Installed Operators . Click the Options menu to the Kube Descheduler Operator entry and select Uninstall Operator . In the confirmation dialog, click Uninstall . Delete the openshift-kube-descheduler-operator namespace. Navigate to Administration Namespaces . Enter openshift-kube-descheduler-operator into the filter box. Click the Options menu to the openshift-kube-descheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-kube-descheduler-operator and click Delete . Delete the KubeDescheduler CRD. 
Navigate to Administration Custom Resource Definitions . Enter KubeDescheduler into the filter box. Click the Options menu to the KubeDescheduler entry and select Delete CustomResourceDefinition . In the confirmation dialog, click Delete . 4.10. Secondary scheduler 4.10.1. Secondary scheduler overview You can install the Secondary Scheduler Operator to run a custom secondary scheduler alongside the default scheduler to schedule pods. 4.10.1.1. About the Secondary Scheduler Operator The Secondary Scheduler Operator for Red Hat OpenShift provides a way to deploy a custom secondary scheduler in OpenShift Container Platform. The secondary scheduler runs alongside the default scheduler to schedule pods. Pod configurations can specify which scheduler to use. The custom scheduler must have the /bin/kube-scheduler binary and be based on the Kubernetes scheduling framework . Important You can use the Secondary Scheduler Operator to deploy a custom secondary scheduler in OpenShift Container Platform, but Red Hat does not directly support the functionality of the custom secondary scheduler. The Secondary Scheduler Operator creates the default roles and role bindings required by the secondary scheduler. You can specify which scheduling plugins to enable or disable by configuring the KubeSchedulerConfiguration resource for the secondary scheduler. 4.10.2. Secondary Scheduler Operator for Red Hat OpenShift release notes The Secondary Scheduler Operator for Red Hat OpenShift allows you to deploy a custom secondary scheduler in your OpenShift Container Platform cluster. These release notes track the development of the Secondary Scheduler Operator for Red Hat OpenShift. For more information, see About the Secondary Scheduler Operator . 4.10.2.1. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.1.0 Issued: 2022-9-1 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.1.0: RHSA-2022:6152 4.10.2.1.1. New features and enhancements The Secondary Scheduler Operator security context configuration has been updated to comply with pod security admission enforcement . 4.10.2.1.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( BZ#2071684 ) 4.10.2.2. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.0.1 Issued: 2022-07-28 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.0.1: RHSA-2022:5699 4.10.2.2.1. New features and enhancements The maximum OpenShift Container Platform version for Secondary Scheduler Operator for Red Hat OpenShift 1.0.1 is 4.11. 4.10.2.2.2. Bug fixes Previously, the secondary scheduler deployment was not deleted after the secondary scheduler custom resource (CR) was deleted, which prevented the Secondary Scheduler Operator and operand from being fully uninstalled. The secondary scheduler deployment is now deleted when the secondary scheduler CR is deleted, so that the Secondary Scheduler Operator can now be fully uninstalled. ( BZ#2100923 ) 4.10.2.2.3. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. 
( BZ#2071684 ) 4.10.2.3. Release notes for Secondary Scheduler Operator for Red Hat OpenShift 1.0.0 Issued: 2022-04-18 The following advisory is available for the Secondary Scheduler Operator for Red Hat OpenShift 1.0.0: RHEA-2022:1346 4.10.2.3.1. New features and enhancements This is the initial release of the Secondary Scheduler Operator for Red Hat OpenShift. 4.10.2.3.2. Known issues Currently, you cannot deploy additional resources, such as config maps, CRDs, or RBAC policies through the Secondary Scheduler Operator. Any resources other than roles and role bindings that are required by your custom secondary scheduler must be applied externally. ( BZ#2071684 ) 4.10.3. Scheduling pods using a secondary scheduler You can run a custom secondary scheduler in OpenShift Container Platform by installing the Secondary Scheduler Operator, deploying the secondary scheduler, and setting the secondary scheduler in the pod definition. 4.10.3.1. Installing the Secondary Scheduler Operator You can use the web console to install the Secondary Scheduler Operator for Red Hat OpenShift. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-secondary-scheduler-operator in the Name field and click Create . Install the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Operators OperatorHub . Enter Secondary Scheduler Operator for Red Hat OpenShift into the filter box. Select the Secondary Scheduler Operator for Red Hat OpenShift and click Install . On the Install Operator page: The Update channel is set to stable , which installs the latest stable release of the Secondary Scheduler Operator for Red Hat OpenShift. Select A specific namespace on the cluster and select openshift-secondary-scheduler-operator from the drop-down menu. Select an Update approval strategy. The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verification Navigate to Operators Installed Operators . Verify that Secondary Scheduler Operator for Red Hat OpenShift is listed with a Status of Succeeded . 4.10.3.2. Deploying a secondary scheduler After you have installed the Secondary Scheduler Operator, you can deploy a secondary scheduler. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Create a config map to hold the configuration for the secondary scheduler. Navigate to Workloads ConfigMaps . Click Create ConfigMap . In the YAML editor, enter the config map definition that contains the necessary KubeSchedulerConfiguration configuration.
For example: apiVersion: v1 kind: ConfigMap metadata: name: "secondary-scheduler-config" 1 namespace: "openshift-secondary-scheduler-operator" 2 data: "config.yaml": | apiVersion: kubescheduler.config.k8s.io/v1beta3 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated 1 The name of the config map. This is used in the Scheduler Config field when creating the SecondaryScheduler CR. 2 The config map must be created in the openshift-secondary-scheduler-operator namespace. 3 The KubeSchedulerConfiguration resource for the secondary scheduler. For more information, see KubeSchedulerConfiguration in the Kubernetes API documentation. 4 The name of the secondary scheduler. Pods that set their spec.schedulerName field to this value are scheduled with this secondary scheduler. 5 The plugins to enable or disable for the secondary scheduler. For a list of default scheduling plugins, see Scheduling plugins in the Kubernetes documentation. Click Create . Create the SecondaryScheduler CR: Navigate to Operators Installed Operators . Select Secondary Scheduler Operator for Red Hat OpenShift . Select the Secondary Scheduler tab and click Create SecondaryScheduler . The Name field defaults to cluster ; do not change this name. The Scheduler Config field defaults to secondary-scheduler-config . Ensure that this value matches the name of the config map created earlier in this procedure. In the Scheduler Image field, enter the image name for your custom scheduler. Important Red Hat does not directly support the functionality of your custom secondary scheduler. Click Create . 4.10.3.3. Scheduling a pod using the secondary scheduler To schedule a pod using the secondary scheduler, set the schedulerName field in the pod definition. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. A secondary scheduler is configured. Procedure Log in to the OpenShift Container Platform web console. Navigate to Workloads Pods . Click Create Pod . In the YAML editor, enter the desired pod configuration and add the schedulerName field: apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 schedulerName: secondary-scheduler 1 1 The schedulerName field must match the name that is defined in the config map when you configured the secondary scheduler. Click Create . Verification Log in to the OpenShift CLI. Describe the pod using the following command: USD oc describe pod nginx -n default Example output Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp ... In the events table, find the event with a message similar to Successfully assigned <namespace>/<pod_name> to <node_name> . In the "From" column, verify that the event was generated from the secondary scheduler and not the default scheduler. Note You can also check the secondary-scheduler-* pod logs in the openshift-secondary-scheduler-operator namespace to verify that the pod was scheduled by the secondary scheduler. 4.10.4.
Uninstalling the Secondary Scheduler Operator You can remove the Secondary Scheduler Operator for Red Hat OpenShift from OpenShift Container Platform by uninstalling the Operator and removing its related resources. 4.10.4.1. Uninstalling the Secondary Scheduler Operator You can uninstall the Secondary Scheduler Operator for Red Hat OpenShift by using the web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. The Secondary Scheduler Operator for Red Hat OpenShift is installed. Procedure Log in to the OpenShift Container Platform web console. Uninstall the Secondary Scheduler Operator for Red Hat OpenShift. Navigate to Operators Installed Operators . Click the Options menu next to the Secondary Scheduler Operator entry and click Uninstall Operator . In the confirmation dialog, click Uninstall . 4.10.4.2. Removing Secondary Scheduler Operator resources Optionally, after uninstalling the Secondary Scheduler Operator for Red Hat OpenShift, you can remove its related resources from your cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Remove CRDs that were installed by the Secondary Scheduler Operator: Navigate to Administration CustomResourceDefinitions . Enter SecondaryScheduler in the Name field to filter the CRDs. Click the Options menu next to the SecondaryScheduler CRD and select Delete Custom Resource Definition . Remove the openshift-secondary-scheduler-operator namespace. Navigate to Administration Namespaces . Click the Options menu next to the openshift-secondary-scheduler-operator entry and select Delete Namespace . In the confirmation dialog, enter openshift-secondary-scheduler-operator in the field and click Delete .
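The same cleanup can also be done from the OpenShift CLI. This is a minimal sketch; list the CRD first to confirm its exact name, because the secondaryschedulers.operator.openshift.io name used here is an assumption, while the namespace name comes from the installation procedure:
oc get crd | grep -i secondaryscheduler
oc delete crd secondaryschedulers.operator.openshift.io
oc delete namespace openshift-secondary-scheduler-operator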
|
[
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #",
"apiVersion: v1 kind: Pod metadata: name: with-pod-affinity spec: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: failure-domain.beta.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod",
"apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1-east # spec affinity 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5 #",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: containers: - name: security-s1 image: docker.io/ocpqe/hello-pod",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: security-s2-east # spec affinity 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6 #",
"oc create -f <pod-spec>.yaml",
"apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" # spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: team4a # spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 # spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 # spec: containers: - name: ocp image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: pod-s2 # spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod #",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #",
"apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod #",
"oc label node node1 e2e-az-name=e2e-az1",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #",
"oc create -f <file-name>.yaml",
"oc label node node1 e2e-az-name=e2e-az3",
"apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #",
"oc create -f <file-name>.yaml",
"oc label node node1 zone=us",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1",
"oc label node node1 zone=emea",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #",
"cat pod-s1.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #",
"oc describe pod pod-s1",
"Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).",
"sysctl -a |grep commit",
"# vm.overcommit_memory = 0 #",
"sysctl -a |grep panic",
"# vm.panic_on_oom = 0 #",
"apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"oc edit machineset <machineset>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #",
"kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]",
"oc apply -f project.yaml",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux failure-domain.beta.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' failure-domain.beta.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos beta.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 beta.kubernetes.io/arch: amd64 region: east 1 type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #",
"apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #",
"apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>",
"apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #",
"oc describe pod router-default-66d5cf9464-7pwkc",
"kind: Pod apiVersion: v1 metadata: # Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464",
"apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\" #",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc label nodes <name> <key>=<value>",
"oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\" #",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.24.0",
"kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1 #",
"apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 # spec: nodeSelector: region: east type: user-node #",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.24.0",
"oc label nodes <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>,<key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.24.0",
"Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector conflicts with its project node label selector",
"oc edit namespace <name>",
"apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes",
"oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api",
"oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"",
"oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node",
"oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.24.0",
"oc label <resource> <name> <key>=<value>",
"oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"",
"oc get nodes -l <key>=<value>",
"oc get nodes -l type=user-node,region=east",
"NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.24.0",
"apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod",
"oc create -f pod-spec.yaml",
"kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod",
"kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 profiles: 5 - AffinityAndTaints - TopologyAndDuplicates 6 - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC",
"oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator",
"apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1",
"apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1beta3 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated",
"apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 schedulerName: secondary-scheduler 1",
"oc describe pod nginx -n default",
"Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/nodes/controlling-pod-placement-onto-nodes-scheduling
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, open a Jira issue that describes your concerns. Provide as much detail as possible so that your request can be addressed quickly. Prerequisites You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure To provide your feedback, perform the following steps: Click the following link: Create Issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide more details about the issue. Include the URL where you found the issue. Provide information for any other required fields. Allow all fields that contain default information to remain at the defaults. Click Create to create the Jira issue for the documentation team. A documentation issue will be created and routed to the appropriate documentation team. Thank you for taking the time to provide feedback.
| null |
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_rhel_system_registration/proc-providing-feedback-on-redhat-documentation
|
20.16.9.2. Bridge to LAN
|
20.16.9.2. Bridge to LAN Note that this is the recommended configuration setting for general guest virtual machine connectivity on host physical machines with static wired networking configurations. Bridge to LAN provides a bridge from the guest virtual machine directly onto the LAN. This assumes there is a bridge device on the host physical machine which has one or more of the host physical machine's physical NICs enslaved. The guest virtual machine will have an associated tun device created with a name of <vnetN> , which can also be overridden with the <target> element (refer to Section 20.16.9.11, "Overriding the target element" ). The <tun> device will be enslaved to the bridge. The IP range / network configuration is whatever is used on the LAN. This provides the guest virtual machine full incoming and outgoing net access just like a physical machine. On Linux systems, the bridge device is normally a standard Linux host physical machine bridge. On host physical machines that support Open vSwitch, it is also possible to connect to an Open vSwitch bridge device by adding a <virtualport type='openvswitch'/> element to the interface definition. The Open vSwitch type virtualport accepts two parameters in its parameters element - an interfaceid which is a standard uuid used to uniquely identify this particular interface to Open vSwitch (if you do not specify one, a random interfaceid will be generated for you when you first define the interface), and an optional profileid which is sent to Open vSwitch as the interface's <port-profile> . To set the bridge to LAN settings, use a management tool that will configure the following part of the domain XML: ... <devices> ... <interface type='bridge'> <source bridge='br0'/> </interface> <interface type='bridge'> <source bridge='br1'/> <target dev='vnet7'/> <mac address="00:11:22:33:44:55"/> </interface> <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch'> <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> ... </devices> Figure 20.37. Devices - network interfaces - bridge to LAN
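As a quick sanity check that is not part of the original procedure, you can confirm on the host physical machine that the guest's tun device was enslaved to the bridge once the guest is running; the bridge name br0 matches the first interface in the example above, and brctl is provided by the bridge-utils package:
# List the bridge and its enslaved interfaces; the guest's vnetN device
# should appear alongside the host's physical NIC.
brctl show br0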
|
[
"<devices> <interface type='bridge'> <source bridge='br0'/> </interface> <interface type='bridge'> <source bridge='br1'/> <target dev='vnet7'/> <mac address=\"00:11:22:33:44:55\"/> </interface> <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch'> <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-section-libvirt-dom-xml-devices-network-interfaces-bridge-to-lan
|
Part VIII. Set Up Passivation
|
Part VIII. Set Up Passivation
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/part-set_up_passivation
|
Chapter 1. Overview
|
Chapter 1. Overview AMQ Broker is a high-performance messaging implementation based on ActiveMQ Artemis. It uses an asynchronous journal for fast message persistence, and supports multiple languages, protocols, and platforms. 1.1. Key features AMQ Broker provides the following features: Clustering and high availability options Fast, native-IO persistence Supports local transactions Supports XA transactions when using AMQ Core Protocol JMS and AMQ OpenWire JMS clients Written in Java for broad platform support Multiple management interfaces: AMQ Management Console, Management APIs, and JMX 1.2. Supported standards and protocols AMQ Broker supports the following standards and protocols: Wire protocols: Core Protocol AMQP 1.0 MQTT OpenWire (Used by A-MQ 6 clients) STOMP JMS 2.0 Note The details of distributed transactions (XA) within AMQP are not provided in the 1.0 version of the specification. If your environment requires support for distributed transactions, it is recommended that you use the AMQ Core Protocol JMS. 1.3. Supported configurations Refer to the article " Red Hat AMQ 7 Supported Configurations " on the Red Hat Customer Portal for current information regarding AMQ Broker supported configurations. 1.4. Document conventions This document uses the following conventions for the sudo command, file paths, and replaceable values. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see Managing sudo access . About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ). Replaceable values This document sometimes uses replaceable values that you must replace with values specific to your environment. Replaceable values are lowercase, enclosed by angle brackets ( < > ), and are styled using italics and monospace font. Multiple words are separated by underscores ( _ ) . For example, in the following command, replace <install_dir> with your own directory name. USD <install_dir> /bin/artemis create mybroker
|
[
"<install_dir> /bin/artemis create mybroker"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/getting_started_with_amq_broker/overview-getting-started
|
Preface
|
Preface Important This Package Manifest provides a list of packages for Red Hat Virtualization 4.4 General Availability and for Batch Updates 1 to 3. The Package Manifests for Red Hat Virtualization 4.4 Batch Update 4 and later releases are available on the Product Software tab on the product download page in the Customer Portal.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/package_manifest/pr01
|
Appendix A. Configuring OpenShift service serving certificates to generate TLS certificates for Keycloak
|
Appendix A. Configuring OpenShift service serving certificates to generate TLS certificates for Keycloak OpenShift's service serving certificate can automate the generation and management of Transport Layer Security (TLS) certificates for use by Keycloak. Infrastructure components, such as the Ingress Controller, within an OpenShift cluster will trust these TLS certificates. Prerequisites Red Hat OpenShift Container Platform version 4.13 or later. Installation of the RHBK operator. Access to the OpenShift web console with the cluster-admin role. Procedure In OpenShift web console, from the Administrator perspective, expand Home from the navigation menu, and click Projects . Search for keycloak , and select the keycloak-system namespace. Create a new service. Click the + icon. In the Import YAML text box, copy the example, and paste it into the text box. Example apiVersion: v1 kind: Service metadata: annotations: service.beta.openshift.io/serving-cert-secret-name: keycloak-tls labels: app: keycloak app.kubernetes.io/instance: keycloak name: keycloak-service-trusted namespace: keycloak-system spec: internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: https port: 8443 selector: app: keycloak app.kubernetes.io/instance: keycloak Click the Create button. Expand Operators from the navigation menu, click Installed Operators , and click Keycloak Operator . In the YAML view of the Keycloak resource, under the spec section, add the ingress property: Example spec: ... ingress: annotations: route.openshift.io/destination-ca-certificate-secret: keycloak-tls route.openshift.io/termination: reencrypt ... By default, the Keycloak operator creates Ingress resources instead of routes. OpenShift automatically creates a route based on the Ingress definition. Specify the name of the secret containing the TLS certificate, under the spec section: Example spec: ... http: tlsSecret: keycloak-tls ... Once Keycloak starts, OpenShift's service serving certificate starts generating TLS certificates for Keycloak. Additional resources Securing service traffic using service serving certificate secrets .
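As an optional verification step that is not part of the procedure above, you can check from the CLI that the service serving certificate controller has populated the secret; the secret and namespace names below are taken from the example manifests:
# Confirm the keycloak-tls secret exists and contains tls.crt and tls.key
oc get secret keycloak-tls -n keycloak-system
# Inspect the issued certificate; it should be signed by the cluster's service CA
oc get secret keycloak-tls -n keycloak-system -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -enddate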
|
[
"apiVersion: v1 kind: Service metadata: annotations: service.beta.openshift.io/serving-cert-secret-name: keycloak-tls labels: app: keycloak app.kubernetes.io/instance: keycloak name: keycloak-service-trusted namespace: keycloak-system spec: internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: https port: 8443 selector: app: keycloak app.kubernetes.io/instance: keycloak",
"spec: ingress: annotations: route.openshift.io/destination-ca-certificate-secret: keycloak-tls route.openshift.io/termination: reencrypt",
"spec: http: tlsSecret: keycloak-tls"
] |
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/deployment_guide/configuring-openshift-service-serving-certificates-to-generate-tls-certificates-for-keycloak_deploy
|
Chapter 8. Deployments
|
Chapter 8. Deployments 8.1. Understanding Deployment and DeploymentConfig objects The Deployment and DeploymentConfig API objects in OpenShift Container Platform provide two similar but different methods for fine-grained management over common user applications. They are composed of the following separate API objects: A Deployment or DeploymentConfig object, either of which describes the desired state of a particular component of the application as a pod template. Deployment objects involve one or more replica sets , which contain a point-in-time record of the state of a deployment as a pod template. Similarly, DeploymentConfig objects involve one or more replication controllers , which preceded replica sets. One or more pods, which represent an instance of a particular version of an application. Use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. 8.1.1. Building blocks of a deployment Deployments and deployment configs are enabled by the use of native Kubernetes API objects ReplicaSet and ReplicationController , respectively, as their building blocks. Users do not have to manipulate replica sets, replication controllers, or pods owned by Deployment or DeploymentConfig objects. The deployment systems ensure changes are propagated appropriately. Tip If the existing deployment strategies are not suited for your use case and you must run manual steps during the lifecycle of your deployment, then you should consider creating a custom deployment strategy. The following sections provide further details on these objects. 8.1.1.1. Replica sets A ReplicaSet is a native Kubernetes API object that ensures a specified number of pod replicas are running at any given time. Note Only use replica sets if you require custom update orchestration or do not require updates at all. Otherwise, use deployments. Replica sets can be used independently, but are used by deployments to orchestrate pod creation, deletion, and updates. Deployments manage their replica sets automatically, provide declarative updates to pods, and do not have to manually manage the replica sets that they create. The following is an example ReplicaSet definition: apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 A label query over a set of resources. The result of matchLabels and matchExpressions are logically conjoined. 2 Equality-based selector to specify resources with labels that match the selector. 3 Set-based selector to filter keys. This selects all resources with key equal to tier and value equal to frontend . 8.1.1.2. Replication controllers Similar to a replica set, a replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller instantiates more up to the defined number. Likewise, if there are more running than desired, it deletes as many as necessary to match the defined amount. The difference between a replica set and a replication controller is that a replica set supports set-based selector requirements whereas a replication controller only supports equality-based selector requirements. 
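To make that difference concrete, here is a minimal sketch contrasting the two selector styles; the environment key and its values are hypothetical and only illustrate the set-based form:
# ReplicationController: equality-based selector only
selector:
  name: frontend
# ReplicaSet: equality-based and set-based selectors
selector:
  matchLabels:
    tier: frontend
  matchExpressions:
    - {key: environment, operator: In, values: [production, canary]}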
A replication controller configuration consists of: The number of replicas desired, which can be adjusted at run time. A Pod definition to use when creating a replicated pod. A selector for identifying managed pods. A selector is a set of labels assigned to the pods that are managed by the replication controller. These labels are included in the Pod definition that the replication controller instantiates. The replication controller uses the selector to determine how many instances of the pod are already running in order to adjust as needed. The replication controller does not perform auto-scaling based on load or traffic, as it does not track either. Rather, this requires its replica count to be adjusted by an external auto-scaler. Note Use a DeploymentConfig to create a replication controller instead of creating replication controllers directly. If you require custom orchestration or do not require updates, use replica sets instead of replication controllers. The following is an example definition of a replication controller: apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod the controller creates. 4 Labels on the pod should include those from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 8.1.2. Deployments Kubernetes provides a first-class, native API object type in OpenShift Container Platform called Deployment . Deployment objects describe the desired state of a particular component of an application as a pod template. Deployments create replica sets, which orchestrate pod lifecycles. For example, the following deployment definition creates a replica set to bring up one hello-openshift pod: Deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80 8.1.3. DeploymentConfig objects Building on replication controllers, OpenShift Container Platform adds expanded support for the software development and deployment lifecycle with the concept of DeploymentConfig objects. In the simplest case, a DeploymentConfig object creates a new replication controller and lets it start up pods. However, OpenShift Container Platform deployments from DeploymentConfig objects also provide the ability to transition from an existing deployment of an image to a new one and also define hooks to be run before or after creating the replication controller. The DeploymentConfig deployment system provides the following capabilities: A DeploymentConfig object, which is a template for running applications. Triggers that drive automated deployments in response to events. User-customizable deployment strategies to transition from the previous version to the new version. A strategy runs inside a pod commonly referred to as the deployment process. A set of hooks (lifecycle hooks) for executing custom behavior at different points during the lifecycle of a deployment. Versioning of your application to support rollbacks either manually or automatically in case of deployment failure.
Manual replication scaling and autoscaling. When you create a DeploymentConfig object, a replication controller is created representing the DeploymentConfig object's pod template. If the deployment changes, a new replication controller is created with the latest pod template, and a deployment process runs to scale down the old replication controller and scale up the new one. Instances of your application are automatically added and removed from both service load balancers and routers as they are created. As long as your application supports graceful shutdown when it receives the TERM signal, you can ensure that running user connections are given a chance to complete normally. The OpenShift Container Platform DeploymentConfig object defines the following details: The elements of a ReplicationController definition. Triggers for creating a new deployment automatically. The strategy for transitioning between deployments. Lifecycle hooks. Each time a deployment is triggered, whether manually or automatically, a deployer pod manages the deployment (including scaling down the old replication controller, scaling up the new one, and running hooks). The deployment pod remains for an indefinite amount of time after it completes the deployment to retain its logs of the deployment. When a deployment is superseded by another, the previous replication controller is retained to enable easy rollback if needed. Example DeploymentConfig definition apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3 1 A configuration change trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration. 2 An image change trigger causes a new deployment to be created each time a new version of the backing image is available in the named image stream. 3 The default Rolling strategy makes a downtime-free transition between deployments. 8.1.4. Comparing Deployment and DeploymentConfig objects Both Kubernetes Deployment objects and OpenShift Container Platform-provided DeploymentConfig objects are supported in OpenShift Container Platform; however, it is recommended to use Deployment objects unless you need a specific feature or behavior provided by DeploymentConfig objects. The following sections go into more detail on the differences between the two object types to further help you decide which type to use. 8.1.4.1. Design One important difference between Deployment and DeploymentConfig objects is the properties of the CAP theorem that each design has chosen for the rollout process. DeploymentConfig objects prefer consistency, whereas Deployment objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted. Manually deleting the node also deletes the corresponding pod. This means that you cannot delete the pod to unstick the rollout, as the kubelet is responsible for deleting the associated pod. However, deployment rollouts are driven from a controller manager. The controller manager runs in high availability mode on masters and uses leader election algorithms to value availability over consistency.
During a failure it is possible for other masters to act on the same deployment at the same time, but this issue will be reconciled shortly after the failure occurs. 8.1.4.2. Deployment-specific features Rollover The deployment process for Deployment objects is driven by a controller loop, in contrast to DeploymentConfig objects that use deployer pods for every new rollout. This means that the Deployment object can have as many active replica sets as possible, and eventually the deployment controller will scale down all old replica sets and scale up the newest one. DeploymentConfig objects can have at most one deployer pod running, otherwise multiple deployers might conflict when trying to scale up what they think should be the newest replication controller. Because of this, only two replication controllers can be active at any point in time. Ultimately, this results in faster rapid rollouts for Deployment objects. Proportional scaling Because the deployment controller is the sole source of truth for the sizes of new and old replica sets owned by a Deployment object, it can scale ongoing rollouts. Additional replicas are distributed proportionally based on the size of each replica set. DeploymentConfig objects cannot be scaled when a rollout is ongoing because the controller will have issues with the deployer process about the size of the new replication controller. Pausing mid-rollout Deployments can be paused at any point in time, meaning you can also pause ongoing rollouts. However, you currently cannot pause deployer pods; if you try to pause a deployment in the middle of a rollout, the deployer process is not affected and continues until it finishes. 8.1.4.3. DeploymentConfig object-specific features Automatic rollbacks Currently, deployments do not support automatically rolling back to the last successfully deployed replica set in case of a failure. Triggers Deployments have an implicit config change trigger in that every change in the pod template of a deployment automatically triggers a new rollout. If you do not want new rollouts on pod template changes, pause the deployment: USD oc rollout pause deployments/<name> Lifecycle hooks Deployments do not yet support any lifecycle hooks. Custom strategies Deployments do not support user-specified custom deployment strategies. 8.2. Managing deployment processes 8.2.1. Managing DeploymentConfig objects DeploymentConfig objects can be managed from the OpenShift Container Platform web console's Workloads page or using the oc CLI. The following procedures show CLI usage unless otherwise stated. 8.2.1.1. Starting a deployment You can start a rollout to begin the deployment process of your application. Procedure To start a new deployment process from an existing DeploymentConfig object, run the following command: USD oc rollout latest dc/<name> Note If a deployment process is already in progress, the command displays a message and a new replication controller will not be deployed. 8.2.1.2. Viewing a deployment You can view a deployment to get basic information about all the available revisions of your application. 
Procedure To show details about all recently created replication controllers for the provided DeploymentConfig object, including any currently running deployment process, run the following command: USD oc rollout history dc/<name> To view details specific to a revision, add the --revision flag: USD oc rollout history dc/<name> --revision=1 For more detailed information about a DeploymentConfig object and its latest revision, use the oc describe command: USD oc describe dc <name> 8.2.1.3. Retrying a deployment If the current revision of your DeploymentConfig object failed to deploy, you can restart the deployment process. Procedure To restart a failed deployment process: USD oc rollout retry dc/<name> If the latest revision was deployed successfully, the command displays a message and the deployment process is not retried. Note Retrying a deployment restarts the deployment process and does not create a new deployment revision. The restarted replication controller has the same configuration it had when it failed. 8.2.1.4. Rolling back a deployment Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console. Procedure To roll back to the last successfully deployed revision of your configuration: USD oc rollout undo dc/<name> The DeploymentConfig object's template is reverted to match the deployment revision specified in the undo command, and a new replication controller is started. If no revision is specified with --to-revision , then the last successfully deployed revision is used. Image change triggers on the DeploymentConfig object are disabled as part of the rollback to prevent accidentally starting a new deployment process soon after the rollback is complete. To re-enable the image change triggers: USD oc set triggers dc/<name> --auto Note Deployment configs also support automatically rolling back to the last successful revision of the configuration in case the latest deployment process fails. In that case, the latest template that failed to deploy is left intact by the system and it is up to users to fix their configurations. 8.2.1.5. Executing commands inside a container You can add a command to a container, which modifies the container's startup behavior by overruling the image's ENTRYPOINT . This is different from a lifecycle hook, which instead can be run once per deployment at a specified time. Procedure Add the command parameters to the spec field of the DeploymentConfig object. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist). spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>' For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments: spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar 8.2.1.6. Viewing deployment logs Procedure To stream the logs of the latest revision for a given DeploymentConfig object: USD oc logs -f dc/<name> If the latest revision is running or failed, the command returns the logs of the process that is responsible for deploying your pods. If it is successful, it returns the logs from a pod of your application.
You can also view logs from older failed deployment processes, if and only if these processes (old replication controllers and their deployer pods) exist and have not been pruned or deleted manually: USD oc logs --version=1 dc/<name> 8.2.1.7. Deployment triggers A DeploymentConfig object can contain triggers, which drive the creation of new deployment processes in response to events inside the cluster. Warning If no triggers are defined on a DeploymentConfig object, a config change trigger is added by default. If triggers are defined as an empty field, deployments must be started manually. Config change deployment triggers The config change trigger results in a new replication controller whenever configuration changes are detected in the pod template of the DeploymentConfig object. Note If a config change trigger is defined on a DeploymentConfig object, the first replication controller is automatically created soon after the DeploymentConfig object itself is created and it is not paused. Config change deployment trigger triggers: - type: "ConfigChange" Image change deployment triggers The image change trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed). Image change deployment trigger triggers: - type: "ImageChange" imageChangeParams: automatic: true 1 from: kind: "ImageStreamTag" name: "origin-ruby-sample:latest" namespace: "myproject" containerNames: - "helloworld" 1 If the imageChangeParams.automatic field is set to false , the trigger is disabled. With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the DeploymentConfig object's helloworld container, a new replication controller is created using the new image for the helloworld container. Note If an image change trigger is defined on a DeploymentConfig object (with a config change trigger and automatic=false , or with automatic=true ) and the image stream tag pointed by the image change trigger does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to the image stream tag. 8.2.1.7.1. Setting deployment triggers Procedure You can set deployment triggers for a DeploymentConfig object using the oc set triggers command. For example, to set a image change trigger, use the following command: USD oc set triggers dc/<dc_name> \ --from-image=<project>/<image>:<tag> -c <container_name> 8.2.1.8. Setting deployment resources A deployment is completed by a pod that consumes resources (memory, CPU, and ephemeral storage) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits. Note The minimum memory limit for a deployment is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the recreate, rolling, or custom deployment strategies. 
Procedure In the following example, each of resources , cpu , memory , and ephemeral-storage is optional: type: "Recreate" resources: limits: cpu: "100m" 1 memory: "256Mi" 2 ephemeral-storage: "1Gi" 3 1 cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3). 2 memory is in bytes: 256Mi represents 268435456 bytes (256 * 2 ^ 20). 3 ephemeral-storage is in bytes: 1Gi represents 1073741824 bytes (2 ^ 30). However, if a quota has been defined for your project, one of the following two items is required: A resources section set with an explicit requests : type: "Recreate" resources: requests: 1 cpu: "100m" memory: "256Mi" ephemeral-storage: "1Gi" 1 The requests object contains the list of resources that correspond to the list of resources in the quota. A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process. To set deployment resources, choose one of the above options. Otherwise, deploy pod creation fails, citing a failure to satisfy quota. Additional resources For more information about resource limits and requests, see Understanding managing application memory . 8.2.1.9. Scaling manually In addition to rollbacks, you can exercise fine-grained control over the number of replicas by manually scaling them. Note Pods can also be auto-scaled using the oc autoscale command. Procedure To manually scale a DeploymentConfig object, use the oc scale command. For example, the following command sets the replicas in the frontend DeploymentConfig object to 3 . USD oc scale dc frontend --replicas=3 The number of replicas eventually propagates to the desired and current state of the deployment configured by the DeploymentConfig object frontend . 8.2.1.10. Accessing private repositories from DeploymentConfig objects You can add a secret to your DeploymentConfig object so that it can access images from a private repository. This procedure shows the OpenShift Container Platform web console method. Procedure Create a new project. From the Workloads page, create a secret that contains credentials for accessing a private image repository. Create a DeploymentConfig object. On the DeploymentConfig object editor page, set the Pull Secret and save your changes. 8.2.1.11. Assigning pods to specific nodes You can use node selectors in conjunction with labeled nodes to control pod placement. Cluster administrators can set the default node selector for a project in order to restrict pod placement to specific nodes. As a developer, you can set a node selector on a Pod configuration to restrict nodes even further. Procedure To add a node selector when creating a pod, edit the Pod configuration, and add the nodeSelector value. This can be added to a single Pod configuration, or in a Pod template: apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd ... Pods created when the node selector is in place are assigned to nodes with the specified labels. The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if a project has the type=user-node and region=east labels added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod is only ever scheduled on nodes that have all three labels. Note Labels can only be set to one value, so setting a node selector of region=west in a Pod configuration that has region=east as the administrator-set default, results in a pod that will never be scheduled. 8.2.1.12. 
Running a pod with a different service account You can run a pod with a service account other than the default. Procedure Edit the DeploymentConfig object: USD oc edit dc/<deployment_config> Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use: spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account> 8.3. Using deployment strategies Deployment strategies are used to change or upgrade applications without downtime so that users barely notice a change. Because users generally access applications through a route handled by a router, deployment strategies can focus on DeploymentConfig object features or routing features. Strategies that focus on DeploymentConfig object features impact all routes that use the application. Strategies that use router features target individual routes. Most deployment strategies are supported through the DeploymentConfig object, and some additional strategies are supported through router features. 8.3.1. Choosing a deployment strategy Consider the following when choosing a deployment strategy: Long-running connections must be handled gracefully. Database conversions can be complex and must be done and rolled back along with the application. If the application is a hybrid of microservices and traditional components, downtime might be required to complete the transition. You must have the infrastructure to do this. If you have a non-isolated test environment, you can break both new and old versions. A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the DeploymentConfig object retries to run the pod until it times out. The default timeout is 10m , a value set in TimeoutSeconds in dc.spec.strategy.*params . 8.3.2. Rolling strategy A rolling deployment slowly replaces instances of the previous version of an application with instances of the new version of the application. The rolling strategy is the default deployment strategy used if no strategy is specified on a DeploymentConfig object. A rolling deployment typically waits for new pods to become ready via a readiness check before scaling down the old components. If a significant issue occurs, the rolling deployment can be aborted. When to use a rolling deployment: When you want to take no downtime during an application update. When your application supports having old code and new code running at the same time. A rolling deployment means you have both old and new versions of your code running at the same time. This typically requires that your application handle N-1 compatibility. Example rolling strategy definition strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: "20%" 4 maxUnavailable: "10%" 5 pre: {} 6 post: {} 1 The time to wait between individual pod updates. If unspecified, this value defaults to 1 . 2 The time to wait between polling the deployment status after update. If unspecified, this value defaults to 1 . 3 The time to wait for a scaling event before giving up. Optional; the default is 600 . Here, giving up means automatically rolling back to the previous complete deployment. 4 maxSurge is optional and defaults to 25% if not specified. See the information below the following procedure. 5 maxUnavailable is optional and defaults to 25% if not specified. See the information below the following procedure. 6 pre and post are both lifecycle hooks.
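For comparison with Deployment objects, which this chapter recommends as the default choice, the same surge and unavailability tuning is expressed under spec.strategy.rollingUpdate in the apps/v1 API; the following is a minimal sketch with illustrative values, not a definitive recommendation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-openshift
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: "20%"        # percentage or absolute number of extra pods allowed during the update
      maxUnavailable: "10%"  # percentage or absolute number of pods that may be unavailable
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest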
The rolling strategy: Executes any pre lifecycle hook. Scales up the new replication controller based on the surge count. Scales down the old replication controller based on the max unavailable count. Repeats this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero. Executes any post lifecycle hook. Important When scaling down, the rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment process will eventually time out and result in a deployment failure. The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10% ) or an absolute value (e.g., 2 ). The default value for both is 25% . These parameters allow the deployment to be tuned for availability and speed. For example: maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up. maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update). maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss. Generally, if you want fast rollouts, use maxSurge . If you have to take into account resource quota and can accept partial unavailability, use maxUnavailable . 8.3.2.1. Canary deployments All rolling deployments in OpenShift Container Platform are canary deployments ; a new version (the canary) is tested before all of the old instances are replaced. If the readiness check never succeeds, the canary instance is removed and the DeploymentConfig object will be automatically rolled back. The readiness check is part of the application code and can be as sophisticated as necessary to ensure the new instance is ready to be used. If you must implement more complex checks of the application (such as sending real user workloads to the new instance), consider implementing a custom deployment or using a blue-green deployment strategy. 8.3.2.2. Creating a rolling deployment Rolling deployments are the default type in OpenShift Container Platform. You can create a rolling deployment using the CLI. Procedure Create an application based on the example deployment images found in Quay.io : USD oc new-app quay.io/openshifttest/deployment-example:latest If you have the router installed, make the application available via a route or use the service IP directly. USD oc expose svc/deployment-example Browse to the application at deployment-example.<project>.<router_domain> to verify you see the v1 image. Scale the DeploymentConfig object up to three replicas: USD oc scale dc/deployment-example --replicas=3 Trigger a new deployment automatically by tagging a new version of the example as the latest tag: USD oc tag deployment-example:v2 deployment-example:latest In your browser, refresh the page until you see the v2 image. When using the CLI, the following command shows how many pods are on version 1 and how many are on version 2. In the web console, the pods are progressively added to v2 and removed from v1: USD oc describe dc deployment-example During the deployment process, the new replication controller is incrementally scaled up.
After the new pods are marked as ready (by passing their readiness check), the deployment process continues. If the pods do not become ready, the process aborts, and the deployment rolls back to its previous version. 8.3.2.3. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click on your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.2.4. Starting a rolling deployment using the Developer perspective You can upgrade an application by starting a rolling deployment. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure In the Topology view of the Developer perspective, click on the application node to see the Overview tab in the side panel. Note that the Update Strategy is set to the default Rolling strategy. In the Actions drop-down menu, select Start Rollout to start a rolling update. The rolling deployment spins up the new version of the application and then terminates the old one. Figure 8.1. Rolling update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 8.3.3. Recreate strategy The recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process. Example recreate strategy definition strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {} 1 recreateParams are optional. 2 pre , mid , and post are lifecycle hooks. The recreate strategy: Executes any pre lifecycle hook. Scales down the deployment to zero. Executes any mid lifecycle hook. Scales up the new deployment. Executes any post lifecycle hook. Important During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure. When to use a recreate deployment: When you must run migrations or other data transformations before your new code starts. When you do not support having new and old versions of your application code running at the same time. When you want to use a RWO volume, which cannot be shared between multiple replicas. A recreate deployment incurs downtime because, for a brief period, no instances of your application are running. However, your old code and new code do not run at the same time. 8.3.3.1.
Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click on your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.3.2. Starting a recreate deployment using the Developer perspective You can switch the deployment strategy from the default rolling update to a recreate update using the Developer perspective in the web console. Prerequisites Ensure that you are in the Developer perspective of the web console. Ensure that you have created an application using the Add view and see it deployed in the Topology view. Procedure To switch to a recreate update strategy and to upgrade an application: In the Actions drop-down menu, select Edit Deployment Config to see the deployment configuration details of the application. In the YAML editor, change the spec.strategy.type to Recreate and click Save . In the Topology view, select the node to see the Overview tab in the side panel. The Update Strategy is now set to Recreate . Use the Actions drop-down menu to select Start Rollout to start an update using the recreate strategy. The recreate strategy first terminates pods for the older version of the application and then spins up pods for the new version. Figure 8.2. Recreate update Additional resources Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view 8.3.4. Custom strategy The custom strategy allows you to provide your own deployment behavior. Example custom strategy definition strategy: type: Custom customParams: image: organization/strategy command: [ "command", "arg1" ] environment: - name: ENV_1 value: VALUE_1 In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image's Dockerfile . The optional environment variables provided are added to the execution environment of the strategy process. Additionally, OpenShift Container Platform provides the following environment variables to the deployment process: Environment variable Description OPENSHIFT_DEPLOYMENT_NAME The name of the new deployment, a replication controller. OPENSHIFT_DEPLOYMENT_NAMESPACE The name space of the new deployment. The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user. Alternatively, use the customParams object to inject the custom deployment logic into the existing deployment strategies. Provide a custom shell script logic and call the openshift-deploy binary. 
Users do not have to supply their custom deployer container image; in this case, the default OpenShift Container Platform deployer image is used instead: strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete This results in following deployment: Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete If the custom deployment strategy process requires access to the OpenShift Container Platform API or the Kubernetes API the container that executes the strategy can use the service account token available inside the container for authentication. 8.3.4.1. Editing a deployment by using the Developer perspective You can edit the deployment strategy, image settings, environment variables, and advanced options for your deployment by using the Developer perspective. Prerequisites You are in the Developer perspective of the web console. You have created an application. Procedure Navigate to the Topology view. Click on your application to see the Details panel. In the Actions drop-down menu, select Edit Deployment to view the Edit Deployment page. You can edit the following Advanced options for your deployment: Optional: You can pause rollouts by clicking Pause rollouts , and then selecting the Pause rollouts for this deployment checkbox. By pausing rollouts, you can make changes to your application without triggering a rollout. You can resume rollouts at any time. Optional: Click Scaling to change the number of instances of your image by modifying the number of Replicas . Click Save . 8.3.5. Lifecycle hooks The rolling and recreate strategies support lifecycle hooks , or deployment hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy: Example pre lifecycle hook pre: failurePolicy: Abort execNewPod: {} 1 1 execNewPod is a pod-based lifecycle hook. Every hook has a failure policy , which defines the action the strategy should take when a hook failure is encountered: Abort The deployment process will be considered a failure if the hook fails. Retry The hook execution should be retried until it succeeds. Ignore Any hook failure should be ignored and the deployment should proceed. Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field. Pod-based lifecycle hook Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a DeploymentConfig object. The following simplified example deployment uses the rolling strategy. 
Triggers and some other minor details are omitted for brevity: kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ "/usr/bin/command", "arg1", "arg2" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4 1 The helloworld name refers to spec.template.spec.containers[0].name . 2 This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image. 3 env is an optional set of environment variables for the hook container. 4 volumes is an optional set of volume references for the hook container. In this example, the pre hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod has the following properties: The hook command is /usr/bin/command arg1 arg2 . The hook container has the CUSTOM_VAR1=custom_value1 environment variable. The hook failure policy is Abort , meaning the deployment process fails if the hook fails. The hook pod inherits the data volume from the DeploymentConfig object pod. 8.3.5.1. Setting lifecycle hooks You can set lifecycle hooks, or deployment hooks, for a deployment using the CLI. Procedure Use the oc set deployment-hook command to set the type of hook you want: --pre , --mid , or --post . For example, to set a pre-deployment hook: USD oc set deployment-hook dc/frontend \ --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \ --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2 8.4. Using route-based deployment strategies Deployment strategies provide a way for the application to evolve. Some strategies use Deployment objects to make changes that are seen by users of all routes that resolve to the application. Other advanced strategies, such as the ones described in this section, use router features in conjunction with Deployment objects to impact specific routes. The most common route-based strategy is to use a blue-green deployment . The new version (the green version) is brought up for testing and evaluation, while the users still use the stable version (the blue version). When ready, the users are switched to the green version. If a problem arises, you can switch back to the blue version. A common alternative strategy is to use A/B versions that are both active at the same time and some users use one version, and some users use the other version. This can be used for experimenting with user interface changes and other features to get user feedback. It can also be used to verify proper operation in a production context where problems impact a limited number of users. A canary deployment tests the new version but when a problem is detected it quickly falls back to the previous version. This can be done with both of the above strategies. The route-based deployment strategies do not scale the number of pods in the services. To maintain desired performance characteristics the deployment configurations might have to be scaled. 8.4.1. Proxy shards and traffic splitting In production environments, you can precisely control the distribution of traffic that lands on a particular shard. When dealing with large numbers of instances, you can use the relative scale of individual shards to implement percentage based traffic.
That combines well with a proxy shard , which forwards or splits the traffic it receives to a separate service or application running elsewhere. In the simplest configuration, the proxy forwards requests unchanged. In more complex setups, you can duplicate the incoming requests and send them to both a separate cluster and a local instance of the application, and compare the result. Other patterns include keeping the caches of a DR installation warm, or sampling incoming traffic for analysis purposes. Any TCP (or UDP) proxy could be run under the desired shard. Use the oc scale command to alter the relative number of instances serving requests under the proxy shard. For more complex traffic management, consider customizing the OpenShift Container Platform router with proportional balancing capabilities. 8.4.2. N-1 compatibility Applications that have new code and old code running at the same time must be careful to ensure that data written by the new code can be read and handled (or gracefully ignored) by the old version of the code. This is sometimes called schema evolution and is a complex problem. This can take many forms: data stored on disk, in a database, in a temporary cache, or that is part of a user's browser session. While most web applications can support rolling deployments, it is important to test and design your application to handle it. For some applications, the period of time that old code and new code is running side by side is short, so bugs or some failed user transactions are acceptable. For others, the failure pattern may result in the entire application becoming non-functional. One way to validate N-1 compatibility is to use an A/B deployment: run the old code and new code at the same time in a controlled way in a test environment, and verify that traffic that flows to the new deployment does not cause failures in the old deployment. 8.4.3. Graceful termination OpenShift Container Platform and Kubernetes give application instances time to shut down before removing them from load balancing rotations. However, applications must ensure they cleanly terminate user connections as well before they exit. On shutdown, OpenShift Container Platform sends a TERM signal to the processes in the container. Application code, on receiving SIGTERM , should stop accepting new connections. This ensures that load balancers route traffic to other active instances. The application code should then wait until all open connections are closed, or gracefully terminate individual connections at the next opportunity, before exiting. After the graceful termination period expires, a process that has not exited is sent the KILL signal, which immediately ends the process. The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period (default 30 seconds) and can be customized per application as necessary. 8.4.4. Blue-green deployments Blue-green deployments involve running two versions of an application at the same time and moving traffic from the in-production version (the blue version) to the newer version (the green version). You can use a rolling strategy or switch services in a route. Because many applications depend on persistent data, you must have an application that supports N-1 compatibility , which means it shares data and implements live migration between the database, store, or disk by creating two copies of the data layer. Consider the data used in testing the new version.
If it is the production data, a bug in the new version can break the production version. 8.4.4.1. Setting up a blue-green deployment Blue-green deployments use two Deployment objects. Both are running, and the one in production depends on the service the route specifies, with each Deployment object exposed to a different service. Note Routes are intended for web (HTTP and HTTPS) traffic, so this technique is best suited for web applications. You can create a new route to the new version and test it. When ready, change the service in the production route to point to the new service and the new (green) version is live. If necessary, you can roll back to the older (blue) version by switching the service back to the previous version. Procedure Create two independent application components. Create a copy of the example application running the v1 image under the example-blue service: USD oc new-app openshift/deployment-example:v1 --name=example-blue Create a second copy that uses the v2 image under the example-green service: USD oc new-app openshift/deployment-example:v2 --name=example-green Create a route that points to the old service: USD oc expose svc/example-blue --name=bluegreen-example Browse to the application at bluegreen-example-<project>.<router_domain> to verify you see the v1 image. Edit the route and change the service name to example-green : USD oc patch route/bluegreen-example -p '{"spec":{"to":{"name":"example-green"}}}' To verify that the route has changed, refresh the browser until you see the v2 image. 8.4.5. A/B deployments The A/B deployment strategy lets you try a new version of the application in a limited way in the production environment. You can specify that the production version gets most of the user requests while a limited fraction of requests go to the new version. Because you control the portion of requests to each version, as testing progresses you can increase the fraction of requests to the new version and ultimately stop using the previous version. As you adjust the request load on each version, the number of pods in each service might have to be scaled as well to provide the expected performance. In addition to upgrading software, you can use this feature to experiment with versions of the user interface. Since some users get the old version and some the new, you can evaluate the user's reaction to the different versions to inform design decisions. For this to be effective, both the old and new versions must be similar enough that both can run at the same time. This is common with bug fix releases and when new features do not interfere with the old. The versions require N-1 compatibility to properly work together. OpenShift Container Platform supports N-1 compatibility through the web console as well as the CLI. 8.4.5.1. Load balancing for A/B testing The user sets up a route with multiple services. Each service handles a version of the application. Each service is assigned a weight and the portion of requests to each service is the service_weight divided by the sum_of_weights . The weight for each service is distributed to the service's endpoints so that the sum of the endpoint weights is the service weight . The route can have up to four services. The weight for the service can be between 0 and 256 . When the weight is 0 , the service does not participate in load-balancing but continues to serve existing persistent connections. When the service weight is not 0 , each endpoint has a minimum weight of 1 .
Because of this, a service with a lot of endpoints can end up with higher weight than intended. In this case, reduce the number of pods to get the expected load balance weight . Procedure To set up the A/B environment: Create the two applications and give them different names. Each creates a Deployment object. The applications are versions of the same program; one is usually the current production version and the other the proposed new version. Create the first application. The following example creates an application called ab-example-a : USD oc new-app openshift/deployment-example --name=ab-example-a Create the second application: USD oc new-app openshift/deployment-example:v2 --name=ab-example-b Both applications are deployed and services are created. Make the application available externally via a route. At this point, you can expose either. It can be convenient to expose the current production version first and later modify the route to add the new version. USD oc expose svc/ab-example-a Browse to the application at ab-example-a.<project>.<router_domain> to verify that you see the expected version. When you deploy the route, the router balances the traffic according to the weights specified for the services. At this point, there is a single service with default weight=1 so all requests go to it. Adding the other service as an alternateBackends entry and adjusting the weights brings the A/B setup to life. This can be done by the oc set route-backends command or by editing the route. Note When using alternateBackends , also use the roundrobin load-balancing strategy to ensure requests are distributed as expected to the services based on weight. roundrobin can be set for a route by using a route annotation . Setting a service weight to 0 by using the oc set route-backends command means the service does not participate in load-balancing, but continues to serve existing persistent connections. Note Changes to the route just change the portion of traffic to the various services. You might have to scale the deployment to adjust the number of pods to handle the anticipated loads. To edit the route, run: USD oc edit route <route_name> Example output ... metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15 ... 8.4.5.1.1. Managing weights of an existing route using the web console Procedure Navigate to the Networking Routes page. Click the Actions menu next to the route you want to edit and select Edit Route . Edit the YAML file. Update the weight to be an integer between 0 and 256 that specifies the relative weight of the target against other target reference objects. The value 0 suppresses requests to this back end. The default is 100 . Run oc explain routes.spec.alternateBackends for more information about the options. Click Save . 8.4.5.1.2. Managing weights of a new route using the web console Navigate to the Networking Routes page. Click Create Route . Enter the route Name . Select the Service . Click Add Alternate Service . Enter a value for Weight and Alternate Service Weight . Enter a number between 0 and 255 that specifies the relative weight compared with other targets. The default is 100 . Select the Target Port . Click Create . 8.4.5.1.3.
Managing weights using the CLI Procedure To manage the services and corresponding weights load balanced by the route, use the oc set route-backends command: USD oc set route-backends ROUTENAME \ [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options] For example, the following sets ab-example-a as the primary service with weight=198 and ab-example-b as the first alternate service with a weight=2 : USD oc set route-backends ab-example ab-example-a=198 ab-example-b=2 This means 99% of traffic is sent to service ab-example-a and 1% to service ab-example-b . This command does not scale the deployment. You might be required to do so to have enough pods to handle the request load. Run the command with no flags to verify the current configuration: USD oc set route-backends ab-example Example output NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%) To alter the weight of an individual service relative to itself or to the primary service, use the --adjust flag. Specifying a percentage adjusts the service relative to either the primary or the first alternate (if you specify the primary). If there are other backends, their weights are kept proportional to the change. The following example alters the weight of ab-example-a and ab-example-b services: USD oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10 Alternatively, alter the weight of a service by specifying a percentage: USD oc set route-backends ab-example --adjust ab-example-b=5% By specifying + before the percentage declaration, you can adjust a weighting relative to the current setting. For example: USD oc set route-backends ab-example --adjust ab-example-b=+15% The --equal flag sets the weight of all services to 100 : USD oc set route-backends ab-example --equal The --zero flag sets the weight of all services to 0 . All requests then return with a 503 error. Note Not all routers may support multiple or weighted backends. 8.4.5.1.4. One service, multiple Deployment objects Procedure Create a new application, adding a label ab-example=true that will be common to all shards: USD oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\=shardA USD oc delete svc/ab-example-a The application is deployed and a service is created. This is the first shard. Make the application available via a route, or use the service IP directly: USD oc expose deployment ab-example-a --name=ab-example --selector=ab-example\=true USD oc expose service ab-example Browse to the application at ab-example-<project_name>.<router_domain> to verify you see the v1 image. Create a second shard based on the same source image and label as the first shard, but with a different tagged version and unique environment variables: USD oc new-app openshift/deployment-example:v2 \ --name=ab-example-b --labels=ab-example=true \ SUBTITLE="shard B" COLOR="red" --as-deployment-config=true USD oc delete svc/ab-example-b At this point, both sets of pods are being served under the route. However, because both browsers (by leaving a connection open) and the router (by default, through a cookie) attempt to preserve your connection to a back-end server, you might not see both shards being returned to you. To force your browser to one or the other shard: Use the oc scale command to reduce replicas of ab-example-a to 0 . USD oc scale dc/ab-example-a --replicas=0 Refresh your browser to show v2 and shard B (in red).
Scale ab-example-a to 1 replica and ab-example-b to 0 : USD oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0 Refresh your browser to show v1 and shard A (in blue). If you trigger a deployment on either shard, only the pods in that shard are affected. You can trigger a deployment by changing the SUBTITLE environment variable in either Deployment object: USD oc edit dc/ab-example-a or USD oc edit dc/ab-example-b
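To make that edit concrete, the following is a minimal, hypothetical sketch of the fragment of a shard's DeploymentConfig that such an edit changes; the container name and the new subtitle value are illustrative assumptions rather than values taken from this procedure:

spec:
  template:
    spec:
      containers:
      - name: deployment-example   # illustrative; check your object for the actual container name
        image: openshift/deployment-example
        env:
        - name: SUBTITLE
          value: "shard A v2"      # changing this value rolls out new pods for this shard only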
|
[
"apiVersion: apps/v1 kind: ReplicaSet metadata: name: frontend-1 labels: tier: frontend spec: replicas: 3 selector: 1 matchLabels: 2 tier: frontend matchExpressions: 3 - {key: tier, operator: In, values: [frontend]} template: metadata: labels: tier: frontend spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always",
"apiVersion: apps/v1 kind: Deployment metadata: name: hello-openshift spec: replicas: 1 selector: matchLabels: app: hello-openshift template: metadata: labels: app: hello-openshift spec: containers: - name: hello-openshift image: openshift/hello-openshift:latest ports: - containerPort: 80",
"apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: frontend spec: replicas: 5 selector: name: frontend template: { ... } triggers: - type: ConfigChange 1 - imageChangeParams: automatic: true containerNames: - helloworld from: kind: ImageStreamTag name: hello-openshift:latest type: ImageChange 2 strategy: type: Rolling 3",
"oc rollout pause deployments/<name>",
"oc rollout latest dc/<name>",
"oc rollout history dc/<name>",
"oc rollout history dc/<name> --revision=1",
"oc describe dc <name>",
"oc rollout retry dc/<name>",
"oc rollout undo dc/<name>",
"oc set triggers dc/<name> --auto",
"spec: containers: - name: <container_name> image: 'image' command: - '<command>' args: - '<argument_1>' - '<argument_2>' - '<argument_3>'",
"spec: containers: - name: example-spring-boot image: 'image' command: - java args: - '-jar' - /opt/app-root/springboots2idemo.jar",
"oc logs -f dc/<name>",
"oc logs --version=1 dc/<name>",
"triggers: - type: \"ConfigChange\"",
"triggers: - type: \"ImageChange\" imageChangeParams: automatic: true 1 from: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" namespace: \"myproject\" containerNames: - \"helloworld\"",
"oc set triggers dc/<dc_name> --from-image=<project>/<image>:<tag> -c <container_name>",
"type: \"Recreate\" resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2 ephemeral-storage: \"1Gi\" 3",
"type: \"Recreate\" resources: requests: 1 cpu: \"100m\" memory: \"256Mi\" ephemeral-storage: \"1Gi\"",
"oc scale dc frontend --replicas=3",
"apiVersion: v1 kind: Pod spec: nodeSelector: disktype: ssd",
"oc edit dc/<deployment_config>",
"spec: securityContext: {} serviceAccount: <service_account> serviceAccountName: <service_account>",
"strategy: type: Rolling rollingParams: updatePeriodSeconds: 1 1 intervalSeconds: 1 2 timeoutSeconds: 120 3 maxSurge: \"20%\" 4 maxUnavailable: \"10%\" 5 pre: {} 6 post: {}",
"oc new-app quay.io/openshifttest/deployment-example:latest",
"oc expose svc/deployment-example",
"oc scale dc/deployment-example --replicas=3",
"oc tag deployment-example:v2 deployment-example:latest",
"oc describe dc deployment-example",
"strategy: type: Recreate recreateParams: 1 pre: {} 2 mid: {} post: {}",
"strategy: type: Custom customParams: image: organization/strategy command: [ \"command\", \"arg1\" ] environment: - name: ENV_1 value: VALUE_1",
"strategy: type: Rolling customParams: command: - /bin/sh - -c - | set -e openshift-deploy --until=50% echo Halfway there openshift-deploy echo Complete",
"Started deployment #2 --> Scaling up custom-deployment-2 from 0 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-2 up to 1 --> Reached 50% (currently 50%) Halfway there --> Scaling up custom-deployment-2 from 1 to 2, scaling down custom-deployment-1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods) Scaling custom-deployment-1 down to 1 Scaling custom-deployment-2 up to 2 Scaling custom-deployment-1 down to 0 --> Success Complete",
"pre: failurePolicy: Abort execNewPod: {} 1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: template: metadata: labels: name: frontend spec: containers: - name: helloworld image: openshift/origin-ruby-sample replicas: 5 selector: name: frontend strategy: type: Rolling rollingParams: pre: failurePolicy: Abort execNewPod: containerName: helloworld 1 command: [ \"/usr/bin/command\", \"arg1\", \"arg2\" ] 2 env: 3 - name: CUSTOM_VAR1 value: custom_value1 volumes: - data 4",
"oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 --volumes data --failure-policy=abort -- /usr/bin/command arg1 arg2",
"oc new-app openshift/deployment-example:v1 --name=example-blue",
"oc new-app openshift/deployment-example:v2 --name=example-green",
"oc expose svc/example-blue --name=bluegreen-example",
"oc patch route/bluegreen-example -p '{\"spec\":{\"to\":{\"name\":\"example-green\"}}}'",
"oc new-app openshift/deployment-example --name=ab-example-a",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b",
"oc expose svc/ab-example-a",
"oc edit route <route_name>",
"metadata: name: route-alternate-service annotations: haproxy.router.openshift.io/balance: roundrobin spec: host: ab-example.my-project.my-domain to: kind: Service name: ab-example-a weight: 10 alternateBackends: - kind: Service name: ab-example-b weight: 15",
"oc set route-backends ROUTENAME [--zero|--equal] [--adjust] SERVICE=WEIGHT[%] [...] [options]",
"oc set route-backends ab-example ab-example-a=198 ab-example-b=2",
"oc set route-backends ab-example",
"NAME KIND TO WEIGHT routes/ab-example Service ab-example-a 198 (99%) routes/ab-example Service ab-example-b 2 (1%)",
"oc set route-backends ab-example --adjust ab-example-a=200 ab-example-b=10",
"oc set route-backends ab-example --adjust ab-example-b=5%",
"oc set route-backends ab-example --adjust ab-example-b=+15%",
"oc set route-backends ab-example --equal",
"oc new-app openshift/deployment-example --name=ab-example-a --as-deployment-config=true --labels=ab-example=true --env=SUBTITLE\\=shardA oc delete svc/ab-example-a",
"oc expose deployment ab-example-a --name=ab-example --selector=ab-example\\=true oc expose service ab-example",
"oc new-app openshift/deployment-example:v2 --name=ab-example-b --labels=ab-example=true SUBTITLE=\"shard B\" COLOR=\"red\" --as-deployment-config=true oc delete svc/ab-example-b",
"oc scale dc/ab-example-a --replicas=0",
"oc scale dc/ab-example-a --replicas=1; oc scale dc/ab-example-b --replicas=0",
"oc edit dc/ab-example-a",
"oc edit dc/ab-example-b"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/building_applications/deployments
|
3.4. NotifyingFuture
|
3.4. NotifyingFuture The NotifyingFuture interface is being deprecated in JBoss Data Grid 6.6.0 in favor of the standard Java 8 CompletableFuture , and is expected to be removed in JBoss Data Grid 7.0. NotifyingFuture is currently returned by all async methods in both Cache and RemoteCache , and will require applications using these methods to use the new class once available.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/notifyingfuture
|
8.3. Snapshot Creation
|
8.3. Snapshot Creation In Red Hat Virtualization the initial snapshot for a virtual machine is different from subsequent snapshots in that the initial snapshot retains its format, either QCOW2 or raw. The first snapshot for a virtual machine uses existing volumes as a base image. Additional snapshots are additional COW layers tracking the changes made to the data stored in the image since the snapshot. As depicted in Figure 8.1, "Initial Snapshot Creation" , the creation of a snapshot causes the volumes that comprise a virtual disk to serve as the base image for all subsequent snapshots. Figure 8.1. Initial Snapshot Creation Snapshots taken after the initial snapshot result in the creation of new COW volumes in which data that is created or changed after the snapshot is taken will be stored. Each newly created COW layer contains only COW metadata. Data that is created by using and operating the virtual machine after a snapshot is taken is written to this new COW layer. When a virtual machine is used to modify data that exists in a COW layer, the data is read from the layer, and written into the newest layer. Virtual machines locate data by checking each COW layer from most recent to oldest, transparently to the virtual machine. Figure 8.2. Additional Snapshot Creation
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/Snapshot_Creation
|
Chapter 6. Performing and configuring basic builds
|
Chapter 6. Performing and configuring basic builds The following sections provide instructions for basic build operations, including starting and canceling builds, editing BuildConfigs , deleting BuildConfigs , viewing build details, and accessing build logs. 6.1. Starting a build You can manually start a new build from an existing build configuration in your current project. Procedure To start a build manually, enter the following command: USD oc start-build <buildconfig_name> 6.1.1. Re-running a build You can manually re-run a build using the --from-build flag. Procedure To manually re-run a build, enter the following command: USD oc start-build --from-build=<build_name> 6.1.2. Streaming build logs You can specify the --follow flag to stream the build's logs in stdout . Procedure To manually stream a build's logs in stdout , enter the following command: USD oc start-build <buildconfig_name> --follow 6.1.3. Setting environment variables when starting a build You can specify the --env flag to set any desired environment variable for the build. Procedure To specify a desired environment variable, enter the following command: USD oc start-build <buildconfig_name> --env=<key>=<value> 6.1.4. Starting a build with source Rather than relying on a Git source pull for a build, you can also start a build by directly pushing your source, which could be the contents of a Git or SVN working directory, a set of pre-built binary artifacts you want to deploy, or a single file. This can be done by specifying one of the following options for the start-build command: Option Description --from-dir=<directory> Specifies a directory that will be archived and used as a binary input for the build. --from-file=<file> Specifies a single file that will be the only file in the build source. The file is placed in the root of an empty directory with the same file name as the original file provided. --from-repo=<local_source_repo> Specifies a path to a local repository to use as the binary input for a build. Add the --commit option to control which branch, tag, or commit is used for the build. When passing any of these options directly to the build, the contents are streamed to the build and override the current build source settings. Note Builds triggered from binary input will not preserve the source on the server, so rebuilds triggered by base image changes will use the source specified in the build configuration. Procedure To start a build from a source code repository and send the contents of a local Git repository as an archive from the tag v2 , enter the following command: USD oc start-build hello-world --from-repo=../hello-world --commit=v2 6.2. Canceling a build You can cancel a build using the web console, or with the following CLI command. Procedure To manually cancel a build, enter the following command: USD oc cancel-build <build_name> 6.2.1. Canceling multiple builds You can cancel multiple builds with the following CLI command. Procedure To manually cancel multiple builds, enter the following command: USD oc cancel-build <build1_name> <build2_name> <build3_name> 6.2.2. Canceling all builds You can cancel all builds from the build configuration with the following CLI command. Procedure To cancel all builds, enter the following command: USD oc cancel-build bc/<buildconfig_name> 6.2.3. Canceling all builds in a given state You can cancel all builds in a given state, such as new or pending , while ignoring the builds in other states. 
Procedure To cancel all builds in a given state, enter the following command: USD oc cancel-build bc/<buildconfig_name> 6.3. Editing a BuildConfig To edit your build configurations, you use the Edit BuildConfig option in the Builds view of the Developer perspective. You can use either of the following views to edit a BuildConfig : The Form view enables you to edit your BuildConfig using the standard form fields and checkboxes. The YAML view enables you to edit your BuildConfig with full control over the operations. You can switch between the Form view and YAML view without losing any data. The data in the Form view is transferred to the YAML view and vice versa. Procedure In the Builds view of the Developer perspective, click the Options menu to see the Edit BuildConfig option. Click Edit BuildConfig to see the Form view option. In the Git section, enter the Git repository URL for the codebase you want to use to create an application. The URL is then validated. Optional: Click Show Advanced Git Options to add details such as: Git Reference to specify a branch, tag, or commit that contains code you want to use to build the application. Context Dir to specify the subdirectory that contains code you want to use to build the application. Source Secret to create a Secret Name with credentials for pulling your source code from a private repository. In the Build from section, select the option that you would like to build from. You can use the following options: Image Stream tag references an image for a given image stream and tag. Enter the project, image stream, and tag of the location you would like to build from and push to. Image Stream image references an image for a given image stream and image name. Enter the image stream image you would like to build from. Also enter the project, image stream, and tag to push to. Docker image : The Docker image is referenced through a Docker image repository. You will also need to enter the project, image stream, and tag to refer to where you would like to push to. Optional: In the Environment Variables section, add the environment variables associated with the project by using the Name and Value fields. To add more environment variables, use Add Value , or Add from ConfigMap and Secret . Optional: To further customize your application, use the following advanced options: Trigger Triggers a new image build when the builder image changes. Add more triggers by clicking Add Trigger and selecting the Type and Secret . Secrets Adds secrets for your application. Add more secrets by clicking Add secret and selecting the Secret and Mount point . Policy Click Run policy to select the build run policy. The selected policy determines the order in which builds created from the build configuration must run. Hooks Select Run build hooks after image is built to run commands at the end of the build and verify the image. Add Hook type , Command , and Arguments to append to the command. Click Save to save the BuildConfig . 6.4. Deleting a BuildConfig You can delete a BuildConfig using the following command. Procedure To delete a BuildConfig , enter the following command: USD oc delete bc <BuildConfigName> This also deletes all builds that were instantiated from this BuildConfig . To delete a BuildConfig and keep the builds instantiated from the BuildConfig , specify the --cascade=false flag when you enter the following command: USD oc delete --cascade=false bc <BuildConfigName> 6.5.
Viewing build details You can view build details with the web console or by using the oc describe CLI command. This displays information including: The build source. The build strategy. The output destination. Digest of the image in the destination registry. How the build was created. If the build uses the Source strategy, the oc describe output also includes information about the source revision used for the build, including the commit ID, author, committer, and message. Procedure To view build details, enter the following command: USD oc describe build <build_name> 6.6. Accessing build logs You can access build logs using the web console or the CLI. Procedure To stream the logs using the build directly, enter the following command: USD oc logs -f build/<build_name> 6.6.1. Accessing BuildConfig logs You can access BuildConfig logs using the web console or the CLI. Procedure To stream the logs of the latest build for a BuildConfig , enter the following command: USD oc logs -f bc/<buildconfig_name> 6.6.2. Accessing BuildConfig logs for a given version build You can access logs for a given version build for a BuildConfig using the web console or the CLI. Procedure To stream the logs for a given version build for a BuildConfig , enter the following command: USD oc logs --version=<number> bc/<buildconfig_name> 6.6.3. Enabling log verbosity You can enable a more verbose output by passing the BUILD_LOGLEVEL environment variable as part of the sourceStrategy in a BuildConfig . Note An administrator can set the default build verbosity for the entire OpenShift Dedicated instance by configuring env/BUILD_LOGLEVEL . This default can be overridden by specifying BUILD_LOGLEVEL in a given BuildConfig . You can specify a higher priority override on the command line for non-binary builds by passing --build-loglevel to oc start-build . Available log levels for source builds are as follows: Level 0 Produces output from containers running the assemble script and all encountered errors. This is the default. Level 1 Produces basic information about the executed process. Level 2 Produces very detailed information about the executed process. Level 3 Produces very detailed information about the executed process, and a listing of the archive contents. Level 4 Currently produces the same information as level 3. Level 5 Produces everything mentioned on previous levels and additionally provides docker push messages. Procedure To enable more verbose output, pass the BUILD_LOGLEVEL environment variable as part of the sourceStrategy or dockerStrategy in a BuildConfig : sourceStrategy: ... env: - name: "BUILD_LOGLEVEL" value: "2" 1 1 Adjust this value to the desired log level.
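The example above shows the sourceStrategy case. As a brief sketch of the equivalent dockerStrategy stanza (the log level shown is only an example value), the setting would look like the following:

dockerStrategy:
  env:
    - name: "BUILD_LOGLEVEL"
      value: "2"   # adjust to the desired log level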
|
[
"oc start-build <buildconfig_name>",
"oc start-build --from-build=<build_name>",
"oc start-build <buildconfig_name> --follow",
"oc start-build <buildconfig_name> --env=<key>=<value>",
"oc start-build hello-world --from-repo=../hello-world --commit=v2",
"oc cancel-build <build_name>",
"oc cancel-build <build1_name> <build2_name> <build3_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc cancel-build bc/<buildconfig_name>",
"oc delete bc <BuildConfigName>",
"oc delete --cascade=false bc <BuildConfigName>",
"oc describe build <build_name>",
"oc describe build <build_name>",
"oc logs -f bc/<buildconfig_name>",
"oc logs --version=<number> bc/<buildconfig_name>",
"sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1"
] |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_buildconfig/basic-build-operations
|
Chapter 8. Senders and receivers
|
Chapter 8. Senders and receivers The client uses sender and receiver links to represent channels for delivering messages. Senders and receivers are unidirectional, with a source end for the message origin, and a target end for the message destination. Sources and targets often point to queues or topics on a message broker. Sources are also used to represent subscriptions. 8.1. Creating queues and topics on demand Some message servers support on-demand creation of queues and topics. When a sender or receiver is attached, the server uses the sender target address or the receiver source address to create a queue or topic with a name matching the address. The message server typically defaults to creating either a queue (for one-to-one message delivery) or a topic (for one-to-many message delivery). The client can indicate which it prefers by setting the queue or topic capability on the source or target. To select queue or topic semantics, follow these steps: Configure your message server for automatic creation of queues and topics. This is often the default configuration. Set either the queue or topic capability on your sender target or receiver source, as in the examples below. Example: Sending to a queue created on demand void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect("amqp://example.com"); proton::sender_options opts {}; proton::target_options topts {}; topts.capabilities(std::vector<proton::symbol> { "queue" }); opts.target(topts) ; conn.open_sender("jobs", opts ); } Example: Receiving from a topic created on demand void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect("amqp://example.com"); proton::receiver_options opts {}; proton::source_options sopts {}; sopts.capabilities(std::vector<proton::symbol> { "topic" }); opts.source(sopts); conn.open_receiver("notifications", opts ); } For more details, see the following examples: queue-send.cpp queue-receive.cpp topic-send.cpp topic-receive.cpp 8.2. Creating durable subscriptions A durable subscription is a piece of state on the remote server representing a message receiver. Ordinarily, message receivers are discarded when a client closes. However, because durable subscriptions are persistent, clients can detach from them and then re-attach later. Any messages received while detached are available when the client re-attaches. Durable subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that the subscription can be recovered. To create a durable subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : proton::container cont {handler, "client-1"}; Create a receiver with a stable name, such as sub-1 , and configure the receiver source for durability by setting the durability_mode and expiry_policy options: void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect("amqp://example.com"); proton::receiver_options opts {}; proton::source_options sopts {}; opts.name("sub-1"); sopts.durability_mode(proton::source::UNSETTLED_STATE); sopts.expiry_policy(proton::source::NEVER); opts.source(sopts); conn.open_receiver("notifications", opts); } To detach from a subscription, use the proton::receiver::detach() method. To terminate the subscription, use the proton::receiver::close() method. For more information, see the durable-subscribe.cpp example . 8.3.
Creating shared subscriptions A shared subscription is a piece of state on the remote server representing one or more message receivers. Because it is shared, multiple clients can consume from the same stream of messages. The client configures a shared subscription by setting the shared capability on the receiver source. Shared subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that multiple client processes can locate the same subscription. If the global capability is set in addition to shared , the receiver name alone is used to identify the subscription. To create a shared subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : proton::container cont {handler, "client-1"}; Create a receiver with a stable name, such as sub-1 , and configure the receiver source for sharing by setting the shared capability: void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect("amqp://example.com"); proton::receiver_options opts {}; proton::source_options sopts {}; opts.name("sub-1"); sopts.capabilities(std::vector<proton::symbol> { "shared" }); opts.source(sopts); conn.open_receiver("notifications", opts); } To detach from a subscription, use the proton::receiver::detach() method. To terminate the subscription, use the proton::receiver::close() method. For more information, see the shared-subscribe.cpp example .
|
[
"void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect(\"amqp://example.com\"); proton::sender_options opts {}; proton::target_options topts {}; topts.capabilities(std::vector<proton::symbol> { \"queue\" }); opts.target(topts) ; conn.open_sender(\"jobs\", opts ); }",
"void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect(\"amqp://example.com\"); proton::receiver_options opts {}; proton::source_options sopts {}; sopts.capabilities(std::vector<proton::symbol> { \"topic\" }); opts.source(sopts); conn.open_receiver(\"notifications\", opts ); }",
"proton::container cont {handler, \"client-1\"};",
"void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect(\"amqp://example.com\"); proton::receiver_options opts {}; proton::source_options sopts {}; opts.name(\"sub-1\"); sopts.durability_mode(proton::source::UNSETTLED_STATE); sopts.expiry_policy(proton::source::NEVER); opts.source(sopts); conn.open_receiver(\"notifications\", opts); }",
"proton::container cont {handler, \"client-1\"};",
"void on_container_start(proton::container& cont) override { proton::connection conn = cont.connect(\"amqp://example.com\"); proton::receiver_options opts {}; proton::source_options sopts {}; opts.name(\"sub-1\"); sopts.capabilities(std::vector<proton::symbol> { \"shared\" }); opts.source(sopts); conn.open_receiver(\"notifications\", opts); }"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_cpp_client/senders_and_receivers
|
5.16. binutils
|
5.16. binutils 5.16.1. RHBA-2012:0872 - binutils bug fix and enhancement update Updated binutils packages that fix two bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. The binutils packages contain a collection of binary utilities, including "ar" (for creating, modifying and extracting from archives), "as" (a family of GNU assemblers), "gprof" (for displaying call graph profile data), "ld" (the GNU linker), "nm" (for listing symbols from object files), "objcopy" (for copying and translating object files), "objdump" (for displaying information from object files), "ranlib" (for generating an index for the contents of an archive), "readelf" (for displaying detailed information about binary files), "size" (for listing the section sizes of an object or archive file), "strings" (for listing printable strings from files), "strip" (for discarding symbols), and "addr2line" (for converting addresses to file and line). Bug Fixes BZ# 676194 Previously, the GNU linker could terminate unexpectedly with a segmentation fault when attempting to link together object files of different architectures (for example, an object file of 32-bit Intel P6 with an object file of Intel 64). This update modifies binutils so that the linker now generates an error message and refuses to link object files in the scenario described. BZ# 809616 When generating build-ID hashes, the GNU linker previously allocated memory for BSS sections. Consequently, the linker could use more memory than was necessary. This update modifies the linker to skip BSS sections and thus avoid unnecessary memory usage when generating build-ID hashes. Enhancements BZ# 739444 With this update, backported patches have been included to support new AMD processors. Also, a duplicate entry for the bextr instruction has been removed from the disassembler's table. BZ# 739144 The GNU linker has been modified in order to improve performance of table of contents (TOC) addressability and Procedure Linkage Table (PLT) call stubs on the PowerPC and PowerPC 64 architectures. All users of binutils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/binutils
|
3.13. Considerations for ricci
|
3.13. Considerations for ricci For Red Hat Enterprise Linux 6, ricci replaces ccsd . Therefore, it is necessary that ricci is running in each cluster node to be able to propagate updated cluster configuration whether it is by means of the cman_tool version -r command, the ccs command, or the luci user interface server. You can start ricci by using service ricci start or by enabling it to start at boot time by means of chkconfig . For information on enabling IP ports for ricci , see Section 3.3.1, "Enabling IP Ports on Cluster Nodes" . For the Red Hat Enterprise Linux 6.1 release and later, using ricci requires a password the first time you propagate updated cluster configuration from any particular node. You set the ricci password as root after you install ricci on your system. To set this password, execute the passwd ricci command, for user ricci .
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-ricci-considerations-CA
|
Installing OpenShift Serverless
|
Installing OpenShift Serverless Red Hat OpenShift Serverless 1.35 Installing the Serverless Operator, Knative CLI, Knative Serving, and Knative Eventing Red Hat OpenShift Documentation Team
|
[
"--- apiVersion: v1 kind: Namespace metadata: name: openshift-serverless --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: serverless-operators namespace: openshift-serverless spec: {} --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: serverless-operator namespace: openshift-serverless spec: channel: stable 1 name: serverless-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f serverless-subscription.yaml",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE serverless-operator.v1.25.0 Red Hat OpenShift Serverless 1.25.0 serverless-operator.v1.24.0 Succeeded",
"kn: No such file or directory",
"tar -xf <file>",
"echo USDPATH",
"oc get ConsoleCLIDownload",
"NAME DISPLAY NAME AGE kn kn - OpenShift Serverless Command Line Interface (CLI) 2022-09-20T08:41:18Z oc-cli-downloads oc - OpenShift Command Line Interface (CLI) 2022-09-20T08:00:20Z",
"oc get route -n openshift-serverless",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kn kn-openshift-serverless.apps.example.com knative-openshift-metrics-3 http-cli edge/Redirect None",
"subscription-manager register",
"subscription-manager refresh",
"subscription-manager attach --pool=<pool_id> 1",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-x86_64-rpms\"",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-s390x-rpms\"",
"subscription-manager repos --enable=\"openshift-serverless-1-for-rhel-8-ppc64le-rpms\"",
"yum install openshift-serverless-clients",
"dnf install 'dnf-command(versionlock)'",
"dnf versionlock add --raw 'openshift-serverless-clients-1.7.*'",
"dnf search --showduplicates openshift-serverless-clients",
"dnf versionlock delete openshift-serverless-clients",
"dnf versionlock add --raw 'openshift-serverless-clients-1.8.*'",
"dnf install --upgrade openshift-serverless-clients",
"kn: No such file or directory",
"tar -xf <filename>",
"echo USDPATH",
"echo USDPATH",
"C:\\> path",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving",
"oc apply -f serving.yaml",
"oc get knativeserving.operator.knative.dev/knative-serving -n knative-serving --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"DependenciesInstalled=True DeploymentsAvailable=True InstallSucceeded=True Ready=True",
"oc get pods -n knative-serving",
"NAME READY STATUS RESTARTS AGE activator-67ddf8c9d7-p7rm5 2/2 Running 0 4m activator-67ddf8c9d7-q84fz 2/2 Running 0 4m autoscaler-5d87bc6dbf-6nqc6 2/2 Running 0 3m59s autoscaler-5d87bc6dbf-h64rl 2/2 Running 0 3m59s autoscaler-hpa-77f85f5cc4-lrts7 2/2 Running 0 3m57s autoscaler-hpa-77f85f5cc4-zx7hl 2/2 Running 0 3m56s controller-5cfc7cb8db-nlccl 2/2 Running 0 3m50s controller-5cfc7cb8db-rmv7r 2/2 Running 0 3m18s domain-mapping-86d84bb6b4-r746m 2/2 Running 0 3m58s domain-mapping-86d84bb6b4-v7nh8 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-bkcnj 2/2 Running 0 3m58s domainmapping-webhook-769d679d45-fff68 2/2 Running 0 3m58s storage-version-migration-serving-serving-0.26.0--1-6qlkb 0/1 Completed 0 3m56s webhook-5fb774f8d8-6bqrt 2/2 Running 0 3m57s webhook-5fb774f8d8-b8lt5 2/2 Running 0 3m57s",
"oc get pods -n knative-serving-ingress",
"NAME READY STATUS RESTARTS AGE net-kourier-controller-7d4b6c5d95-62mkf 1/1 Running 0 76s net-kourier-controller-7d4b6c5d95-qmgm2 1/1 Running 0 76s 3scale-kourier-gateway-6688b49568-987qz 1/1 Running 0 75s 3scale-kourier-gateway-6688b49568-b5tnp 1/1 Running 0 75s",
"apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing",
"oc apply -f eventing.yaml",
"oc get knativeeventing.operator.knative.dev/knative-eventing -n knative-eventing --template='{{range .status.conditions}}{{printf \"%s=%s\\n\" .type .status}}{{end}}'",
"InstallSucceeded=True Ready=True",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE broker-controller-58765d9d49-g9zp6 1/1 Running 0 7m21s eventing-controller-65fdd66b54-jw7bh 1/1 Running 0 7m31s eventing-webhook-57fd74b5bd-kvhlz 1/1 Running 0 7m31s imc-controller-5b75d458fc-ptvm2 1/1 Running 0 7m19s imc-dispatcher-64f6d5fccb-kkc4c 1/1 Running 0 7m18s",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 logging: level: INFO 9",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-eventing spec: channel: enabled: true 1 bootstrapServers: <bootstrap_servers> 2 source: enabled: true 3 broker: enabled: true 4 defaultConfig: bootstrapServers: <bootstrap_servers> 5 numPartitions: <num_partitions> 6 replicationFactor: <replication_factor> 7 sink: enabled: true 8 logging: level: INFO 9",
"oc get pods -n knative-eventing",
"NAME READY STATUS RESTARTS AGE kafka-broker-dispatcher-7769fbbcbb-xgffn 2/2 Running 0 44s kafka-broker-receiver-5fb56f7656-fhq8d 2/2 Running 0 44s kafka-channel-dispatcher-84fd6cb7f9-k2tjv 2/2 Running 0 44s kafka-channel-receiver-9b7f795d5-c76xr 2/2 Running 0 44s kafka-controller-6f95659bf6-trd6r 2/2 Running 0 44s kafka-source-dispatcher-6bf98bdfff-8bcsn 2/2 Running 0 44s kafka-webhook-eventing-68dc95d54b-825xs 2/2 Running 0 44s",
"apiVersion: operator.serverless.openshift.io/v1alpha1 kind: KnativeKafka metadata: name: knative-kafka namespace: knative-kafka spec: config: deployment: \"kube-rbac-proxy-cpu-request\": \"10m\" 1 \"kube-rbac-proxy-memory-request\": \"20Mi\" 2 \"kube-rbac-proxy-cpu-limit\": \"100m\" 3 \"kube-rbac-proxy-memory-limit\": \"100Mi\" 4",
"systemctl start --user podman.socket",
"export DOCKER_HOST=\"unix://USD{XDG_RUNTIME_DIR}/podman/podman.sock\"",
"kn func build -v",
"podman machine init --memory=8192 --cpus=2 --disk-size=20",
"podman machine start Starting machine \"podman-machine-default\" Waiting for VM Mounting volume... /Users/myuser:/Users/user [...truncated output...] You can still connect Docker API clients by setting DOCKER_HOST using the following command in your terminal session: export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock' Machine \"podman-machine-default\" started successfully",
"export DOCKER_HOST='unix:///Users/myuser/.local/share/containers/podman/machine/podman-machine-default/podman.sock'",
"kn func build -v",
"kn: No such file or directory",
"tar xvzf <tar_archive>",
"mv <filename> kn-workflow",
"chmod +x <path/to/downloaded/kn-workflow>",
"mv <path/to/downloaded/kn-workflow> /usr/local/bin/kn-workflow",
"kn plugin list",
"tar xvzf <tar_archive>",
"mv <filename> kn-workflow",
"chmod +x <path/to/downloaded/kn-workflow>",
"mv <path/to/downloaded/kn-workflow> /usr/local/bin/kn-workflow",
"kn plugin list",
"Expand-Archive -Path <filename>.zip -DestinationPath <destination>",
"Rename-Item -Path <destination>\\<filename>.exe -NewName kn-workflow.exe",
"Copy-Item -Path <destination>\\kn-workflow.exe -Destination \"C:\\Program Files\\kn-workflow.exe\"",
"kn plugin list",
"podman login registry.redhat.io",
"export KN_IMAGE=registry.redhat.io/openshift-serverless-1/logic-kn-workflow-cli-artifacts-rhel8:1.33.0",
"export KN_CONTAINER_ID=USD(podman run -di USDKN_IMAGE)",
"podman cp USDKN_CONTAINER_ID:<path_to_binary> .",
"podman stop USDKN_CONTAINER_ID",
"podman rm USDKN_CONTAINER_ID",
"tar xvzf kn-workflow-linux-amd64.tar.gz",
"mv kn kn-workflow",
"cp path/to/downloaded/kn-workflow /usr/local/bin/kn-workflow",
"chmod +x /usr/local/bin/kn-workflow",
"kn plugin list",
"kn-workflow",
"Manage OpenShift Serverless Logic Workflow projects Usage: kn workflow [command] Aliases: kn workflow, kn-workflow Available Commands: completion Generate the autocompletion script for the specified shell create Creates a new OpenShift Serverless Logic Workflow project deploy Deploy an OpenShift Serverless Logic Workflow project on Kubernetes via SonataFlow Operator help Help about any command quarkus Manage OpenShift Serverless Logic Workflow projects built in Quarkus run Run an OpenShift Serverless Logic Workflow project in development mode undeploy Undeploy an OpenShift Serverless Logic Workflow project on Kubernetes via SonataFlow Operator version Show the version Flags: -h, --help help for kn -v, --version version for kn Use \"kn [command] --help\" for more information about a command.",
"stable stable-1.28 +--------------+ +--------------------------------------------+ | | | | | +--------+ | corresponds to | +--------+ +--------+ +--------+ | | | 1.28.0 |----------------------> | 1.28.0 | | 1.28.1 | | 1.28.2 | | | +--------+ | | +--------+ +--------+ +--------+ | | | | | ^ | | | | +-----|-------------------|------------|-----+ | +--------+ | created| |upgrades | | | 1.28.1 | | from | hotfix_xyz |to | | +--------+ | | +------------+ | | | | +-->| |--+ | | | | | | | +--------+ | upgrades to +------------+ | | | 1.29.0 |<----------------------------------------------------------+ | +--------+ | | | | | | +--------+ | | | 1.30.0 | | | +--------+ | | | +--------------+",
"stable stable-1.28 +--------------+ +--------------+ | | | | | +--------+ | | +--------+ | | | 1.28.0 | | | | 1.28.0 | | | +--------+ | | +--------+ | | | | | | | | | | | | +--------+ | | | | | | 1.29.0 |<-------- | v | | +--------+ | | | +--------+ | | | +---------| 1.28.1 | | | | | +--------+ | | +--------+ | | | | | 1.30.0 | | | | | +--------+ | | | | | | | +--------------+ +--------------+",
"stable stable-1.29 +--------------+ +--------------+ | | | | | +--------+ | | +--------+ | | | 1.29.0 | | | | 1.29.0 | | | +--------+ | | +--------+ | | | | | | | | | v | | +--------+ | | +--------+ | | | 1.29.1 | | | | 1.29.1 | | | +--------+ | | +--------+ | | | | | | | | | | | | +--------+ | | | | | | 1.30.0 |<---------------------+ | | +--------+ | | | | | | | +--------------+ +--------------+",
"stable-1.29 stable-1.30 +--------------+ +--------------+ | | | | | +--------+ | | +--------+ | | | 1.29.0 | | ------> | 1.30.0 | | | +--------+ | | | +--------+ | | | | | | | | | | | | +--------+ | | | | | | 1.29.1 |-------+ | | | +--------+ | | | | | | | +--------------+ +--------------+",
"The installed KnativeServing version is v1.5.0.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: serverless-operator namespace: openshift-serverless spec: channel: stable name: serverless-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual startingCSV: serverless-operator.v1.26.0",
"oc apply -f serverless-subscription.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html-single/installing_openshift_serverless/index
|
Assessing and remediating system issues using Red Hat Insights Tasks with FedRAMP
|
Assessing and remediating system issues using Red Hat Insights Tasks with FedRAMP Red Hat Insights 1-latest Use predefined Insights Tasks playbooks to resolve issues on your systems Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks_with_fedramp/index
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_for_windows/making-open-source-more-inclusive
|
5.3. Load Balancing Policy: VM_Evenly_Distributed
|
5.3. Load Balancing Policy: VM_Evenly_Distributed A virtual machine evenly distributed load balancing policy distributes virtual machines evenly between hosts based on a count of the virtual machines. The high virtual machine count is the maximum number of virtual machines that can run on each host, beyond which qualifies as overloading the host. The VM_Evenly_Distributed policy allows an administrator to set a high virtual machine count for hosts. The maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host is also set by an administrator. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside this migration threshold. The administrator also sets the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines than other hosts it can run. If any host is running more virtual machines than the high virtual machine count and at least one host has a virtual machine count that falls outside of the migration threshold, virtual machines are migrated one by one to the host in the cluster that has the lowest CPU utilization. One virtual machine is migrated at a time until every host in the cluster has a virtual machine count that falls within the migration threshold.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/load_balancing_policy_vm_even_distribution
|
Chapter 4. Configuring SSSD to use LDAP and require TLS authentication
|
Chapter 4. Configuring SSSD to use LDAP and require TLS authentication The System Security Services Daemon (SSSD) is a daemon that manages identity data retrieval and authentication on a Red Hat Enterprise Linux host. A system administrator can configure the host to use a standalone LDAP server as the user account database. The administrator can also specify the requirement that the connection with the LDAP server must be encrypted with a TLS certificate. Note The SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector, namely a man-in-the-middle (MITM) attack that could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, you should enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. 4.1. An OpenLDAP client using SSSD to retrieve data from LDAP in an encrypted way The authentication method of the LDAP objects can be either a Kerberos password or an LDAP password. Note that the questions of authentication and authorization of the LDAP objects are not addressed here. Important Configuring SSSD with LDAP is a complex procedure requiring a high level of expertise in SSSD and LDAP. Consider using an integrated and automated solution such as Active Directory or Red Hat Identity Management (IdM) instead. For details about IdM, see Planning Identity Management .
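For orientation, the following /etc/sssd/sssd.conf fragment shows where the ldap_id_use_start_tls option fits. It is a minimal illustrative sketch: the domain name, server URI, search base, and CA certificate path are placeholders, not values from this procedure.

# Illustrative /etc/sssd/sssd.conf fragment (all values are placeholders)
[sssd]
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap-server.example.com
ldap_search_base = dc=example,dc=com
# Enforce StartTLS for identity lookups, as recommended above
ldap_id_use_start_tls = True
ldap_tls_cacert = /etc/openldap/certs/cacert.pem
cache_credentials = True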
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-sssd-to-use-ldap-and-require-tls-authentication_configuring-authentication-and-authorization-in-rhel
|
Chapter 13. Pruning objects to reclaim resources
|
Chapter 13. Pruning objects to reclaim resources Over time, API objects created in OpenShift Dedicated can accumulate in the cluster's etcd data store through normal user operations, such as when building and deploying applications. A user with the dedicated-admin role can periodically prune older versions of objects from the cluster that are no longer required. For example, by pruning images you can delete older images and layers that are no longer in use, but are still taking up disk space. 13.1. Basic pruning operations The CLI groups prune operations under a common parent command: USD oc adm prune <object_type> <options> This specifies: The <object_type> to perform the action on, such as groups , builds , deployments , or images . The <options> supported to prune that object type. 13.2. Pruning groups To prune groups records from an external provider, administrators can run the following command: USD oc adm prune groups \ --sync-config=path/to/sync/config [<options>] Table 13.1. oc adm prune groups flags Options Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --blacklist Path to the group blacklist file. --whitelist Path to the group whitelist file. --sync-config Path to the synchronization configuration file. Procedure To see the groups that the prune command deletes, run the following command: USD oc adm prune groups --sync-config=ldap-sync-config.yaml To perform the prune operation, add the --confirm flag: USD oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm 13.3. Pruning deployment resources You can prune resources associated with deployments that are no longer required by the system, due to age and status. The following command prunes replication controllers associated with DeploymentConfig objects: USD oc adm prune deployments [<options>] Note To also prune replica sets associated with Deployment objects, use the --replica-sets flag. This flag is currently a Technology Preview feature. Table 13.2. oc adm prune deployments flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --keep-complete=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Complete and replica count of zero. The default is 5 . --keep-failed=<N> Per the DeploymentConfig object, keep the last N replication controllers that have a status of Failed and replica count of zero. The default is 1 . --keep-younger-than=<duration> Do not prune any replication controller that is younger than <duration> relative to the current time. Valid units of measurement include nanoseconds ( ns ), microseconds ( us ), milliseconds ( ms ), seconds ( s ), minutes ( m ), and hours ( h ). The default is 60m . --orphans Prune all replication controllers that no longer have a DeploymentConfig object, has status of Complete or Failed , and has a replica count of zero. Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm 13.4. Pruning builds To prune builds that are no longer required by the system due to age and status, administrators can run the following command: USD oc adm prune builds [<options>] Table 13.3. 
oc adm prune builds flags Option Description --confirm Indicate that pruning should occur, instead of performing a dry-run. --orphans Prune all builds whose build configuration no longer exists, status is complete, failed, error, or canceled. --keep-complete=<N> Per build configuration, keep the last N builds whose status is complete. The default is 5 . --keep-failed=<N> Per build configuration, keep the last N builds whose status is failed, error, or canceled. The default is 1 . --keep-younger-than=<duration> Do not prune any object that is younger than <duration> relative to the current time. The default is 60m . Procedure To see what a pruning operation would delete, run the following command: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m To actually perform the prune operation, add the --confirm flag: USD oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 \ --keep-younger-than=60m --confirm Note Developers can enable automatic build pruning by modifying their build configuration. 13.5. Automatically pruning images Images from the OpenShift image registry that are no longer required by the system due to age, status, or exceed limits are automatically pruned. Cluster administrators can configure the Pruning Custom Resource, or suspend it. Prerequisites You have access to an OpenShift Dedicated cluster using an account with dedicated-admin permissions. Install the oc CLI. Procedure Verify that the object named imagepruners.imageregistry.operator.openshift.io/cluster contains the following spec and status fields: spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: "Periodic image pruner has been created." - type: Scheduled status: "True" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: "Image pruner job has been scheduled." - type: Failed staus: "False" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: "Most recent image pruning job succeeded." 1 schedule : CronJob formatted schedule. This is an optional field, default is daily at midnight. 2 suspend : If set to true , the CronJob running pruning is suspended. This is an optional field, default is false . The initial value on new clusters is false . 3 keepTagRevisions : The number of revisions per tag to keep. This is an optional field, default is 3 . The initial value is 3 . 4 keepYoungerThanDuration : Retain images younger than this duration. This is an optional field. If a value is not specified, either keepYoungerThan or the default value 60m (60 minutes) is used. 5 keepYoungerThan : Deprecated. The same as keepYoungerThanDuration , but the duration is specified as an integer in nanoseconds. This is an optional field. When keepYoungerThanDuration is set, this field is ignored. 6 resources : Standard pod resource requests and limits. This is an optional field. 7 affinity : Standard pod affinity. This is an optional field. 8 nodeSelector : Standard pod node selector. This is an optional field. 9 tolerations : Standard pod tolerations. This is an optional field. 10 successfulJobsHistoryLimit : The maximum number of successful jobs to retain. Must be >= 1 to ensure metrics are reported. 
This is an optional field, default is 3 . The initial value is 3 . 11 failedJobsHistoryLimit : The maximum number of failed jobs to retain. Must be >= 1 to ensure metrics are reported. This is an optional field, default is 3 . The initial value is 3 . 12 observedGeneration : The generation observed by the Operator. 13 conditions : The standard condition objects with the following types: Available : Indicates if the pruning job has been created. Reasons can be Ready or Error. Scheduled : Indicates if the pruning job has been scheduled. Reasons can be Scheduled, Suspended, or Error. Failed : Indicates if the most recent pruning job failed. Important The Image Registry Operator's behavior for managing the pruner is orthogonal to the managementState specified on the Image Registry Operator's ClusterOperator object. If the Image Registry Operator is not in the Managed state, the image pruner can still be configured and managed by the Pruning Custom Resource. However, the managementState of the Image Registry Operator alters the behavior of the deployed image pruner job: Managed : the --prune-registry flag for the image pruner is set to true . Removed : the --prune-registry flag for the image pruner is set to false , meaning it only prunes image metadata in etcd. 13.6. Pruning cron jobs Cron jobs can perform pruning of successful jobs, but might not properly handle failed jobs. Therefore, the cluster administrator should perform regular cleanup of jobs manually. They should also restrict the access to cron jobs to a small group of trusted users and set appropriate quota to prevent the cron job from creating too many jobs and pods. Additional resources Resource quotas across multiple projects
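As an illustration of the suspend field described above, the pruner can be paused and resumed with a standard oc patch merge patch. This is a sketch of the general pattern, not an additional documented procedure step:

oc patch imagepruners.imageregistry.operator.openshift.io/cluster \
  --type=merge -p '{"spec": {"suspend": true}}'

# Re-enable the scheduled pruning job later
oc patch imagepruners.imageregistry.operator.openshift.io/cluster \
  --type=merge -p '{"spec": {"suspend": false}}'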
|
[
"oc adm prune <object_type> <options>",
"oc adm prune groups --sync-config=path/to/sync/config [<options>]",
"oc adm prune groups --sync-config=ldap-sync-config.yaml",
"oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm",
"oc adm prune deployments [<options>]",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"oc adm prune builds [<options>]",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m",
"oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm",
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed staus: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\""
] |
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/building_applications/pruning-objects
|
Chapter 2. Configure Red Hat build of OpenJDK 8 in FIPS mode
|
Chapter 2. Configure Red Hat build of OpenJDK 8 in FIPS mode Red Hat build of OpenJDK 8 checks if the FIPS mode is enabled in the system at startup. If yes, it self-configures FIPS according to the global policy. This is the default behavior since RHEL 8.3. RHEL 8 releases require the com.redhat.fips system property set to true as a JVM argument. For example, -Dcom.redhat.fips=true . Note If FIPS mode is enabled in the system while a JVM instance is running, the instance needs to be restarted for changes to take effect. For more information on how to enable FIPS mode, see Switching the system to FIPS mode . You can configure Red Hat build of OpenJDK 8 to bypass the global FIPS alignment. For example, you might want to enable FIPS compliance through a Hardware Security Module (HSM) instead of the scheme provided by Red Hat build of OpenJDK. Following are the FIPS properties for Red Hat build of OpenJDK 8: security.useSystemPropertiesFile Security property located at USDJAVA_HOME/lib/security/java.security or in the file directed to java.security.properties . Privileged access is required to modify the value in the default java.security file. Persistent configuration. When set to false , both the global FIPS and the crypto-policies alignment are disabled. By default, it is set to true . java.security.disableSystemPropertiesFile System property passed to the JVM as an argument. For example, -Djava.security.disableSystemPropertiesFile=true . Non-privileged access is enough. Non-persistent configuration. When set to true , both the global FIPS and the crypto-policies alignment are disabled; generating the same effect than a security.useSystemPropertiesFile=false security property. If both properties are set to different behaviors, java.security.disableSystemPropertiesFile overrides. By default, it is set to false . com.redhat.fips System property passed to a JVM as an argument. For example, -Dcom.redhat.fips=false . Non-privileged access is enough. Non-persistent configuration. When set to false , disables the FIPS alignment while still applying the global crypto-policies. If any of the properties is set to disable the crypto-policies alignment, this property has no effect. In other words, crypto-policies is a prerequisite for FIPS alignment. By default, it is set to true .
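To make the property descriptions above concrete, the following command lines show how the flags are typically passed to the JVM. The application jar name is a placeholder, and only the flags named in this section are used:

# Explicitly request FIPS alignment (required as a JVM argument on older RHEL 8 releases)
java -Dcom.redhat.fips=true -jar app.jar

# Keep the crypto-policies alignment but opt out of FIPS alignment
java -Dcom.redhat.fips=false -jar app.jar

# Disable both the global FIPS and the crypto-policies alignment for this run only
java -Djava.security.disableSystemPropertiesFile=true -jar app.jar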
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_on_rhel_with_fips/config-fips-in-openjdk
|
Chapter 2. Creating a filtered Google Cloud integration
|
Chapter 2. Creating a filtered Google Cloud integration Create a Google Cloud function script that can filter your billing data, store it in object storage, and send the filtered reports to cost management. Important If you created an unfiltered Google Cloud integration, do not complete the following steps. Your Google Cloud integration is already complete. You must have a Red Hat account user with Cloud Administrator permissions before you can add integrations to cost management. To create a Google Cloud integration, you will complete the following tasks: Create a Google Cloud project for your cost management data. Create a bucket for filtered reports. Create a billing service account member with the correct role to export your data to cost management. Create a BigQuery dataset that contains the cost data. Create a billing export that sends the cost management data to your BigQuery dataset. Note Google Cloud is a third-party product and its console and documentation can change. The instructions for configuring the third-party integrations are correct at the time of publishing. For the most up-to-date information, see the Google Cloud Platform documentation . 2.1. Adding your Google Cloud account as an integration You can add your Google Cloud account as an integration. After adding a Google Cloud integration, the cost management application processes the cost and usage data from your Google Cloud account and makes it viewable. Prerequisites To add data integrations to cost management, you must have a Red Hat account with Cloud Administrator permissions. Procedure From Red Hat Hybrid Cloud Console , click Settings Menu > Integrations . On the Settings page, in the Cloud tab, click Add integration . In the Add a cloud integration wizard, select Google Cloud as the cloud provider type and click . Enter a name for your integration. Click . In the Select application step, select Cost management and click . 2.2. Creating a Google Cloud project Create a Google Cloud project to gather and send your cost reports to Red Hat. Prerequisites Access to Google Cloud Console with resourcemanager.projects.create permission Procedure In the Google Cloud Console click IAM & Admin Create a Project . Enter a Project name in the new page that appears and select your billing account. Select the Organization . Enter the parent organization in the Location box. Click Create . In cost management: On the Project page, enter your Project ID . To configure Google Cloud to filter your data before it sends the data to Red Hat, select I wish to manually customize the data set sent to cost management . Click . Additional resources For additional information about creating projects, see the Google Cloud documentation Creating and managing projects . 2.3. Creating a Google Cloud bucket Create a bucket for filtered reports that you will create later. Buckets are containers that store data. In the Google Cloud Console : Go to Cloud Storage Buckets . Click Create . Enter your bucket information. Name your bucket. In this example, use customer-data . Click Create , then click Confirm in the confirmation dialog. In cost management: On the Create cloud storage bucket page, enter your Cloud storage bucket name . Additional resources For additional information about creating buckets, see the Google Cloud documentation on Creating buckets . 2.4.
Creating a Google Cloud Identity and Access Management role A custom Identity and Access Management (IAM) role for cost management gives access to specific cost related resources required to enable a Google Cloud Platform integration and prohibits access to other resources. Prerequisites Access to Google Cloud Console with these permissions: resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Google Cloud project Procedure In the Google Cloud Console , click IAM & Admin Roles . Select the project you created from the menu. Click + Create role . Enter a Title , Description and ID for the role. In this example, use customer-data-role . Click + ADD PERMISSIONS . Use the Enter property name or value field to search and select the following permissions for your custom role: storage.objects.get storage.objects.list storage.buckets.get Click ADD . Click CREATE . In the Add a cloud integration wizard, on the Create IAM role page, click . Additional resources For additional information about roles and their usage, see the Google Cloud documentation Understanding roles and Creating and managing custom roles . 2.5. Adding a billing service account member to your Google Cloud project You must create a billing service account member that can export cost reports to Red Hat Hybrid Cloud Console in your project. Prerequisites You must have access to Google Cloud Console and have the following permissions: resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Google Cloud project A cost management Identity and Access Management (IAM) role In the Google Cloud Console : Click IAM & Admin IAM . Select the project you created from the menu. Click Grant Access . Paste the following principal into the New principals field: In the Assign roles section, assign the IAM role you created in Creating a Google Cloud Identity and Access Management role . In this example, use customer-data-role . Click SAVE . In the cost management: On the Assign access page, click . Verification steps Navigate to IAM & Admin IAM . Verify the new member is present with the correct role. Additional resources For additional information about roles and their usage, see the Google Cloud documentation Understanding roles and Creating and managing custom roles . 2.6. Creating a Google Cloud BigQuery dataset Create a BigQuery dataset to collect and store the billing data for cost management. Prerequisites Access to Google Cloud Console with bigquery.datasets.create permission Google Cloud project Procedure In Google Cloud Console , click BigQuery . In the Explorer panel, select the project you created. Click the action icon for your project name. Click CREATE DATASET . Enter a name for your dataset in the Dataset ID field. In this example, use CustomerFilteredData . Click CREATE DATASET . In the Add a cloud integration wizard, on the Create dataset page, enter the name of the dataset you created. Click . 2.7. Exporting Google Cloud billing data to BigQuery Enabling a billing export to BigQuery sends your Google Cloud billing data (such as usage, cost estimates, and pricing data) automatically to the BigQuery dataset you created in the last step. Prerequisites Access to Google Cloud Console with the Billing Account Administrator role Google Cloud project Billing service member with the cost management Identity and Access Management (IAM) role BigQuery dataset Procedure In the Google Cloud Console , click Billing Billing export . 
Click the Billing export tab. Click EDIT SETTINGS in the Detailed usage cost section. Select the cost management Project and Billing export dataset you created in the dropdown menus. Click SAVE . In the Add a cloud integration wizard, on the Billing export page, click . On the Review details page, review the information about your integration and click Add . Copy your source_uuid so that you can use it in the cloud function. Verification steps Verify a checkmark with Enabled in the Detailed usage cost section, with correct Project name and Dataset name . 2.8. Creating a function to post filtered data to your storage bucket Create a function that filters your data and adds it to the storage account that you created to share with Red Hat. You can use the example Python script to gather the cost data from your cost exports related to your Red Hat expenses and add it to the storage account. This script filters the cost data you created with BigQuery, removes non-Red Hat information, then creates .csv files, stores them in the bucket you created, and sends the data to Red Hat. Prerequisites You must have a Red Hat Hybrid Cloud Console service account . You must have enabled the API service in GCP. In the Google Cloud Console : Click Security Secret manager to set up a secret to authenticate your function with Red Hat without storing your credentials in your function. Enable the Secret Manager if it is not already enabled. From Secret Manager , click Create secret . Name your secret, add your service account Client ID, and click Create Secret . Repeat this process to save a secret for your service account Client secret. In the Google Cloud Console search bar, search for functions and select the Cloud Functions result. On the Cloud Functions page, click Create function . Name the function. In this example, use customer-data-function . In the Trigger section, select HTTPS as the trigger type. In Runtime, build, connections and security settings , click the Security and image repo tab. Click Add a secret reference . Select the client_id secret you created before. Set the reference method to Exposed as environment variable . Name the exposed environment variable client_id . Click Done . Repeat the steps for your client_secret . Click . On the Cloud Functions Code page, set the runtime to the latest Python version available. Open the requirements.txt file. Paste the following lines at the end of the file. Set the Entry Point to get_filtered_data . Open the main.py file. Paste the following python script . Change the values in the section marked # Required vars to update to the values for your environment. Update the values for the following lines: INTEGRATION_ID Cost management integration_id BUCKET Filtered data GCP Bucket PROJECT_ID Your project ID DATASET Your dataset name TABLE_ID Your table ID Click Deploy . 2.9. Trigger your function to post filtered data to your storage bucket Create a scheduler job to run the function you created to send filtered data to Red Hat on a schedule. Procedure Copy the Trigger URL for the function you created to post the cost reports. You will need to add it to the Google Cloud Scheduler. In the Google Cloud Console , search for functions and select the Cloud Functions result. On the Cloud Functions page, select your function, and click the Trigger tab. In the HTTP section, click Copy to clipboard . Create the scheduler job. In the Google Cloud Console , search for cloud scheduler and select the Cloud Scheduler result. Click Create job . Name your scheduler job. 
In this example, use CustomerFilteredDataSchedule . In the Frequency field, set the cron expression for when you want the function to run. In this example, use 0 9 * * * to run the function daily at 9 AM. Set the time zone and click Continue . Configure the execution on the page. In the Target type field, select HTTP . In the URL field, paste the Trigger URL you copied. In the body field, paste the following code that passes into the function to trigger it. {"name": "Scheduler"} In the Auth header field, select Add OIDC token . Click the Service account field and click Create to create a service account and role for the scheduler job. In the Service account details step, name your service account. In this example, use scheduler-service-account . Accept the default Service account ID and click Create and Continue . In the Grant this service account access to project field, search for and select Cloud Scheduler Job Runner as the first role. Click ADD ANOTHER ROLE , then search for and select Cloud Functions Invoker . Click Continue . Click Done to finish creating the service account. Go back to the Cloud scheduler tab. In the Configure the execution page, select the Service account field. Refresh the page and select the service account you just created. Click Continue and then click Create . After completing these steps, you have successfully set up your Google Cloud function to send reports to Red Hat. For steps, refer to Chapter 3, steps for managing your costs . 2.10. Creating additional cloud functions to collect finalized data At the beginning of the month, Google Cloud finalizes the bill for the month before. Create an additional function and scheduled job to trigger it to send these reports to Red Hat so cost management can process them. Procedure Set up a function to post reports: From Cloud Functions , select Create function . Name your function. Select HTTP trigger . In Runtime, build, connections, security settings , click Security . Click Reference secret . Select Exposed as environment variable . Select Secret version or Latest . Click Done . Repeat the process for your other secrets. Click Save . Copy your Trigger URL . Click . Select the latest Python runtime. Set Entry point to get_filtered_data . Add your Google Cloud function . Update the values for INTEGRATION_ID , BUCKET , PROJECT_ID , DATASET , and TABLE_ID . Remove the comments from the following lines: # month_end = now.replace(day=1) - timedelta(days=1) # delta = now.replace(day=1) - timedelta(days=query_range) # year = month_end.strftime("%Y") # month = month_end.strftime("%m") # day = month_end.strftime("%d") Select the requirements.py file and add the requirements from the requirements.txt file. Click Deploy . Set up a cloud scheduler to trigger your function: Go to Cloud Scheduler . Click Schedule a job . Name your schedule. Set the frequency. For example, the following cron will run the job on the fourth day of every month, 0 9 4 * * Set a Time zone . Click Continue . Paste the function Trigger URL you copied earlier. In the request body, add {"name": "Scheduler"} . Set the auth header to OIDC token . Select or create a service account with the Cloud Scheduler Job Runner and Cloud Functions Invoker roles. Click Continue . Click Save .
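The procedure above relies on the python script that Red Hat links from the documentation. For orientation only, the following is a minimal sketch of what such a filtering function can look like; the table name, column list, SQL filter, CSV layout, and the final upload step are assumptions, so use the linked script rather than this sketch for a real deployment.

# Heavily simplified sketch of a filtering Cloud Function, NOT the script Red Hat provides.
import csv
import io
from datetime import datetime, timedelta

from google.cloud import bigquery, storage

# Required vars to update (placeholders)
INTEGRATION_ID = "your-cost-management-integration-id"
BUCKET = "customer-data"
PROJECT_ID = "your-project-id"
DATASET = "CustomerFilteredData"
TABLE_ID = "gcp_billing_export_v1_XXXXXX"   # detailed billing export table (placeholder)

def get_filtered_data(request):
    """HTTP entry point: query Red Hat-related costs and stage them as CSV in the bucket."""
    now = datetime.utcnow()
    delta = now - timedelta(days=5)

    query = f"""
        SELECT billing_account_id, service.description AS service,
               sku.description AS sku, usage_start_time, usage_end_time,
               project.id AS project_id, cost, currency
        FROM `{PROJECT_ID}.{DATASET}.{TABLE_ID}`
        WHERE usage_start_time >= TIMESTAMP('{delta.strftime('%Y-%m-%d')}')
          AND sku.description LIKE '%RedHat%'           -- illustrative filter only
    """
    rows = bigquery.Client(project=PROJECT_ID).query(query).result()

    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([field.name for field in rows.schema])
    for row in rows:
        writer.writerow(list(row.values()))

    blob_name = f"{now.strftime('%Y-%m-%d')}_filtered.csv"
    storage.Client().bucket(BUCKET).blob(blob_name).upload_from_string(
        buf.getvalue(), content_type="text/csv")

    # The Red Hat-provided script then authenticates with the client_id / client_secret
    # secrets exposed as environment variables and POSTs the report to the cost
    # management ingress endpoint; that step is omitted from this sketch.
    return f"uploaded {blob_name} for integration {INTEGRATION_ID}"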
|
[
"[email protected]",
"requests google-cloud-bigquery google-cloud-storage",
"{\"name\": \"Scheduler\"}",
"month_end = now.replace(day=1) - timedelta(days=1) # delta = now.replace(day=1) - timedelta(days=query_range) # year = month_end.strftime(\"%Y\") # month = month_end.strftime(\"%m\") # day = month_end.strftime(\"%d\")"
] |
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_google_cloud_data_into_cost_management/assembly-adding-filtered-gcp-int
|
B.21. firefox
|
B.21. firefox B.21.1. RHSA-2010:0861 - Critical: firefox security update Updated firefox packages that fix several security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Firefox is an open source web browser. XULRunner provides the XUL Runtime environment for Mozilla Firefox. CVE-2010-3765 A race condition flaw was found in the way Firefox handled Document Object Model (DOM) element properties. Malicious HTML content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2010-3175 , CVE-2010-3176 , CVE-2010-3179 , CVE-2010-3183 , CVE-2010-3180 Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. CVE-2010-3177 A flaw was found in the way the Gopher parser in Firefox converted text into HTML. A malformed file name on a Gopher server could, when accessed by a victim running Firefox, allow arbitrary JavaScript to be executed in the context of the Gopher domain. CVE-2010-3178 A same-origin policy bypass flaw was found in Firefox. An attacker could create a malicious web page that, when viewed by a victim, could steal private data from a different website the victim had loaded with Firefox. CVE-2010-3182 A flaw was found in the script that launches Firefox. The LD_LIBRARY_PATH variable was appending a "." character, which could allow a local attacker to execute arbitrary code with the privileges of a different user running Firefox, if that user ran Firefox from within an attacker-controlled directory. For technical details regarding these flaws, refer to the Mozilla security advisories for Firefox 3.6.11 and 3.6.12: http://www.mozilla.org/security/known-vulnerabilities/firefox36.html#firefox3.6.11 http://www.mozilla.org/security/known-vulnerabilities/firefox36.html#firefox3.6.12 All Firefox users should upgrade to these updated packages, which contain Firefox version 3.6.12, which corrects these issues. After installing the update, Firefox must be restarted for the changes to take effect.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/firefox
|
Chapter 2. Customizing sample pipelines
|
Chapter 2. Customizing sample pipelines Learn how to update Pipeline as Code ( pac ) URLs within the sample templates repository and customize the sample pipelines repository to match your workflow. By customizing pac URLs, organizations can integrate custom pipelines tailored to their CI/CD requirements. Prerequisites Before making changes, ensure that: You have already forked and cloned the following repositories: Sample pipelines Sample templates Your forked repositories are up to date and synced with the upstream repository. Customizing the sample templates repository to update pac URLs Procedure Access forked sample pipelines repository URL: Open your forked sample pipelines repository. Copy the complete URL from the address bar. For example, https://github.com/<username>/tssc-sample-pipelines . Update pac URLs in the sample templates repository: Navigate to your local cloned sample templates repository using your terminal. Run the following command, replacing {fork_url} with the copied URL from step 1 and {branch_name} with your desired branch name (for example, main): ./scripts/update-tekton-definition {fork_url} {branch_name} # For example, ./scripts/update-tekton-definition https://github.com/<username>/tssc-sample-pipelines main Review, commit, and push changes: Review the updated files within your sample templates repository. Commit the changes with an appropriate message. Push the committed changes to your forked repository. Customizing the sample pipelines repository to your workflow The sample pipelines repository provides a foundation upon which you can build your organization's specific CI/CD workflows. The sample pipelines repository includes several key pipeline templates in the pac directory: gitops-repo : This directory holds the pipeline definitions for validating pull requests within your GitOps repository. It triggers the gitops-pull-request pipeline, located in the pipelines directory, validating that image updates comply with organizational standards. This setup is crucial for promotion workflows, where an application's deployment state is advanced sequentially through environments, such as from development to staging or from staging to production. For more information about pipeline definitions in gitops-repo , refer to Gitops Pipelines . pipelines : This directory houses the implementations of build and validation pipelines that are referenced by the event handlers in both the gitops-repo and source-repo . By examining the contents of this directory, you can understand the specific actions performed by the pipelines, including how they contribute to the secure promotion and deployment of applications. source-repo : This directory focuses on Dockerfile-based secure supply chain software builds. It includes pipeline definitions for cloning the source, generating and signing artifacts (such as .sig for image signature, .att for attestation, and .sbom for Software Bill of Materials), and pushing these to the user's image registry. For more information about pipeline definitions in source-repo , refer to Shared Git resolver model for shared pipeline and tasks . tasks : This directory houses a collection of tasks that can be added or modified, aligning with organizational needs. For example, Advanced Cluster Security (ACS) tasks can be substituted with alternative checks, or entirely new tasks can be integrated into the pipeline to enhance its functionality and compliance.
Verification Consider creating an application to explore the impact of your template and pipeline customization. Additional resources To customize templates, see Customizing sample software templates . For information on Pipelines as Code, refer to About Pipelines as Code .
|
[
"./scripts/update-tekton-definition {fork_url} {branch_name} For example, .scripts/update-tekton-definition https://github.com/<username>/tssc-sample-pipelines main"
] |
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/customizing_red_hat_trusted_application_pipeline/customizing-sample-pipelines_default
|
3.6. Configuring IP Networking with ip Commands
|
3.6. Configuring IP Networking with ip Commands As a system administrator, you can configure a network interface using the ip command, but changes are not persistent across reboots; when you reboot, you will lose any changes. The commands for the ip utility, sometimes referred to as iproute2 after the upstream package name, are documented in the ip(8) man page. The package name in Red Hat Enterprise Linux 7 is iproute . If necessary, you can check that the ip utility is installed by checking its version number as follows: The ip commands can be used to add and remove addresses and routes to interfaces in parallel with NetworkManager , which will preserve them and recognize them in nmcli , nmtui , control-center , and the D-Bus API. To bring an interface down: Note The ip link set ifname command sets a network interface in IFF_UP state and enables it from the kernel's scope. This is different from the ifup ifname command for initscripts or NetworkManager 's activation state of a device. In fact, NetworkManager always sets an interface up even if it is currently disconnected. Disconnecting the device through the nmcli tool does not remove the IFF_UP flag. In this way, NetworkManager gets notifications about the carrier state. Note that the ip utility replaces the ifconfig utility because the net-tools package (which provides ifconfig ) does not support InfiniBand addresses. For information about available OBJECTs, use the ip help command. For example: ip link help and ip addr help . Note ip commands given on the command line will not persist after a system restart. Where persistence is required, make use of configuration files ( ifcfg files) or add the commands to a script. Examples of using the command line and configuration files for each task are included after nmtui and nmcli examples but before explaining the use of one of the graphical user interfaces to NetworkManager , namely, control-center and nm-connection-editor . The ip utility can be used to assign IP addresses to an interface with the following form: ip addr [ add | del ] address dev ifname Assigning a Static Address Using ip Commands To assign an IP address to an interface: Further examples and command options can be found in the ip-address(8) manual page. Configuring Multiple Addresses Using ip Commands As the ip utility supports assigning multiple addresses to the same interface it is no longer necessary to use the alias interface method of binding multiple addresses to the same interface. The ip command to assign an address can be repeated multiple times in order to assign multiple addresses. For example: For more details on the commands for the ip utility, see the ip(8) manual page. Note ip commands given on the command line will not persist after a system restart.
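Because ip commands do not persist, an ifcfg file is the usual way to make the static address from the examples above permanent. The following /etc/sysconfig/network-scripts/ifcfg-enp1s0 fragment is only an illustration; the device name, address, gateway, and DNS server values are placeholders:

# Illustrative /etc/sysconfig/network-scripts/ifcfg-enp1s0 (placeholder values)
DEVICE=enp1s0
NAME=enp1s0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.3
PREFIX=24
GATEWAY=10.0.0.1
DNS1=10.0.0.1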
|
[
"~]USD ip -V ip utility, iproute2-ss130716",
"ip link set ifname down",
"~]# ip address add 10.0.0.3/24 dev enp1s0 You can view the address assignment of a specific device: ~]# ip addr show dev enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether f0:de:f1:7b:6e:5f brd ff:ff:ff:ff:ff:ff inet 10.0.0.3/24 brd 10.0.0.255 scope global global enp1s0 valid_lft 58682sec preferred_lft 58682sec inet6 fe80::f2de:f1ff:fe7b:6e5f/64 scope link valid_lft forever preferred_lft forever",
"~]# ip address add 192.168.2.223/24 dev enp1s0 ~]# ip address add 192.168.4.223/24 dev enp1s0 ~]# ip addr 3: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 52:54:00:fb:77:9e brd ff:ff:ff:ff:ff:ff inet 192.168. 2 .223/24 scope global enp1s0 inet 192.168. 4 .223/24 scope global enp1s0"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_IP_Networking_with_ip_Commands
|
10.5.47. DefaultIcon
|
10.5.47. DefaultIcon DefaultIcon specifies the icon displayed in server generated directory listings for files which have no other icon specified. The unknown.gif image file is the default.
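As a minimal example, assuming the standard /icons/ alias from the default configuration, the directive takes a single path to the icon image:

DefaultIcon /icons/unknown.gif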
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-defaulticon
|
Chapter 2. Configuring user authentication using authselect
|
Chapter 2. Configuring user authentication using authselect authselect is a utility that allows you to configure system identity and authentication sources by selecting a specific profile. Profile is a set of files that describes how the resulting Pluggable Authentication Modules (PAM) and Network Security Services (NSS) configuration will look like. You can choose the default profile set or create a custom profile. 2.1. What is authselect used for You can use the authselect utility to configure user authentication on a Red Hat Enterprise Linux 8 host. You can configure identity information and authentication sources and providers by selecting one of the ready-made profiles: The default sssd profile enables the System Security Services Daemon (SSSD) for systems that use LDAP authentication. The winbind profile enables the Winbind utility for systems directly integrated with Microsoft Active Directory. The nis profile ensures compatibility with legacy Network Information Service (NIS) systems. The minimal profile serves only local users and groups directly from system files, which allows administrators to remove network authentication services that are no longer needed. After selecting an authselect profile for a given host, the profile is applied to every user logging into the host. Red Hat recommends using authselect in semi-centralized identity management environments, for example if your organization utilizes LDAP, Winbind, or NIS databases to authenticate users to use services in your domain. Warning You do not need to use authselect if: Your host is part of Red Hat Enterprise Linux Identity Management (IdM). Joining your host to an IdM domain with the ipa-client-install command automatically configures SSSD authentication on your host. Your host is part of Active Directory via SSSD. Calling the realm join command to join your host to an Active Directory domain automatically configures SSSD authentication on your host. Red Hat recommends against changing the authselect profiles configured by ipa-client-install or realm join . If you need to modify them, display the current settings before making any modifications, so you can revert back to them if necessary: 2.1.1. Files and directories authselect modifies The authconfig utility, used in Red Hat Enterprise Linux versions, created and modified many different configuration files, making troubleshooting more difficult. Authselect simplifies testing and troubleshooting because it only modifies the following files and directories: /etc/nsswitch.conf The GNU C Library and other applications use this Name Service Switch (NSS) configuration file to determine the sources from which to obtain name-service information in a range of categories, and in what order. Each category of information is identified by a database name. /etc/pam.d/* files Linux-PAM (Pluggable Authentication Modules) is a system of modules that handle the authentication tasks of applications (services) on the system. The nature of the authentication is dynamically configurable: the system administrator can choose how individual service-providing applications will authenticate users. The configuration files in the /etc/pam.d/ directory list the PAMs that will perform authentication tasks required by a service, and the appropriate behavior of the PAM-API in the event that individual PAMs fail. 
Among other things, these files contain information about: User password lockout conditions The ability to authenticate with a smart card The ability to authenticate with a fingerprint reader /etc/dconf/db/distro.d/* files This directory holds configuration profiles for the dconf utility, which you can use to manage settings for the GNOME Desktop Graphical User Interface (GUI). 2.1.2. Data providers in /etc/nsswitch.conf The default sssd profile establishes SSSD as a source of information by creating sss entries in /etc/nsswitch.conf : This means that the system first looks to SSSD if information concerning one of those items is requested: passwd for user information group for user group information netgroup for NIS netgroup information automount for NFS automount information services for information regarding services Only if the requested information is not found in the sssd cache and on the server providing authentication, or if sssd is not running, the system looks at the local files, that is /etc/* . For example, if information is requested about a user ID, the user ID is first searched in the sssd cache. If it is not found there, the /etc/passwd file is consulted. Analogically, if a user's group affiliation is requested, it is first searched in the sssd cache and only if not found there, the /etc/group file is consulted. In practice, the local files database is not normally consulted. The most important exception is the case of the root user, which is never handled by sssd but by files . 2.2. Choosing an authselect profile As a system administrator, you can select a profile for the authselect utility for a specific host. The profile will be applied to every user logging into the host. Prerequisites You need root credentials to run authselect commands Procedure Select the authselect profile that is appropriate for your authentication provider. For example, for logging into the network of a company that uses LDAP, choose sssd . Optional: You can modify the default profile settings by adding the following options to the authselect select sssd or authselect select winbind command, for example: with-faillock with-smartcard with-fingerprint To see the full list of available options, see Converting your scripts from authconfig to authselect or the authselect-migration(7) man page on your system. Note Make sure that the configuration files that are relevant for your profile are configured properly before finishing the authselect select procedure. For example, if the sssd daemon is not configured correctly and active, running authselect select results in only local users being able to authenticate, using pam_unix . Verification Verify sss entries for SSSD are present in /etc/nsswitch.conf : Review the contents of the /etc/pam.d/system-auth file for pam_sss.so entries: Additional Resources What is authselect used for Modifying a ready-made authselect profile Creating and deploying your own authselect profile 2.3. Modifying a ready-made authselect profile As a system administrator, you can modify one of the default profiles to suit your needs. You can modify any of the items in the /etc/authselect/user-nsswitch.conf file with the exception of: passwd group netgroup automount services Running authselect select profile_name afterwards will result in transferring permissible changes from /etc/authselect/user-nsswitch.conf to the /etc/nsswitch.conf file. Unacceptable changes are overwritten by the default profile configuration. Important Do not modify the /etc/nsswitch.conf file directly. 
Procedure Select an authselect profile, for example: Edit the /etc/authselect/user-nsswitch.conf file with your desired changes. Apply the changes from the /etc/authselect/user-nsswitch.conf file: Verification Review the /etc/nsswitch.conf file to verify that the changes from /etc/authselect/user-nsswitch.conf have been propagated there. Additional Resources What is authselect used for 2.4. Creating and deploying your own authselect profile As a system administrator, you can create and deploy a custom profile by making a customized copy of one of the default profiles. This is particularly useful if Modifying a ready-made authselect profile is not enough for your needs. When you deploy a custom profile, the profile is applied to every user logging into the given host. Procedure Create your custom profile by using the authselect create-profile command. For example, to create a custom profile called user-profile based on the ready-made sssd profile but one in which you can configure the items in the /etc/nsswitch.conf file yourself: Warning If you are planning to modify /etc/authselect/custom/user-profile/{password-auth,system-auth,fingerprint-auth,smartcard-auth,postlogin} , then enter the command above without the --symlink-pam option. This is to ensure that the modification persists during the upgrade of authselect-libs . Including the --symlink-pam option in the command means that PAM templates will be symbolic links to the origin profile files instead of their copy; including the --symlink-meta option means that meta files, such as README and REQUIREMENTS will be symbolic links to the origin profile files instead of their copy. This ensures that all future updates to the PAM templates and meta files in the original profile will be reflected in your custom profile, too. The command creates a copy of the /etc/nsswitch.conf file in the /etc/authselect/custom/user-profile/ directory. Configure the /etc/authselect/custom/user-profile/nsswitch.conf file. Select the custom profile by running the authselect select command, and adding custom/ name_of_the_profile as a parameter. For example, to select the user-profile profile: Selecting the user-profile profile for your machine means that if the sssd profile is subsequently updated by Red Hat, you will benefit from all the updates with the exception of updates made to the /etc/nsswitch.conf file. Example 2.1. Creating a profile The following procedure shows how to create a profile based on the sssd profile which only consults the local static table lookup for hostnames in the /etc/hosts file, not in the dns or myhostname databases. Edit the /etc/nsswitch.conf file by editing the following line: Create a custom profile based on sssd that excludes changes to /etc/nsswitch.conf : Select the profile: Optional: Check that selecting the custom profile has created the /etc/pam.d/system-auth file according to the chosen sssd profile left the configuration in the /etc/nsswitch.conf unchanged: Note Running authselect select sssd would, in contrast, result in hosts: files dns myhostname Additional Resources What is authselect used for 2.5. Converting your scripts from authconfig to authselect If you use ipa-client-install or realm join to join a domain, you can safely remove any authconfig call in your scripts. If this is not possible, replace each authconfig call with its equivalent authselect call. In doing that, select the correct profile and the appropriate options. 
In addition, edit the necessary configuration files: /etc/krb5.conf /etc/sssd/sssd.conf (for the sssd profile) or /etc/samba/smb.conf (for the winbind profile) Relation of authconfig options to authselect profiles and Authselect profile option equivalents of authconfig options show the authselect equivalents of authconfig options. Table 2.1. Relation of authconfig options to authselect profiles Authconfig options Authselect profile --enableldap --enableldapauth sssd --enablesssd --enablesssdauth sssd --enablekrb5 sssd --enablewinbind --enablewinbindauth winbind --enablenis nis Table 2.2. Authselect profile option equivalents of authconfig options Authconfig option Authselect profile feature --enablesmartcard with-smartcard --enablefingerprint with-fingerprint --enableecryptfs with-ecryptfs --enablemkhomedir with-mkhomedir --enablefaillock with-faillock --enablepamaccess with-pamaccess --enablewinbindkrb5 with-krb5 Examples of authselect command equivalents to authconfig commands shows example transformations of Kickstart calls to authconfig into Kickstart calls to authselect . Table 2.3. Examples of authselect command equivalents to authconfig commands authconfig command authselect equivalent authconfig --enableldap --enableldapauth --enablefaillock --updateall authselect select sssd with-faillock authconfig --enablesssd --enablesssdauth --enablesmartcard --smartcardmodule=sssd --updateall authselect select sssd with-smartcard authconfig --enableecryptfs --enablepamaccess --updateall authselect select sssd with-ecryptfs with-pamaccess authconfig --enablewinbind --enablewinbindauth --winbindjoin=Administrator --updateall realm join -U Administrator --client-software=winbind WINBINDDOMAIN 2.6. Additional resources What is pam_faillock and how to use it in Red Hat Enterprise Linux 8 & 9? Set Password Policy/Complexity in Red Hat Enterprise Linux 8
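As a concrete illustration of the conversions in Table 2.3, a provisioning or kickstart script line would change as follows (the surrounding script content is assumed):

# Before (authconfig):
authconfig --enablesssd --enablesssdauth --enablesmartcard --smartcardmodule=sssd --updateall

# After (authselect):
authselect select sssd with-smartcard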
|
[
"authselect current Profile ID: sssd Enabled features: - with-sudo - with-mkhomedir - with-smartcard",
"passwd: sss files group: sss files netgroup: sss files automount: sss files services: sss files",
"authselect select sssd",
"passwd: sss files group: sss files netgroup: sss files automount: sss files services: sss files",
"Generated by authselect on Tue Sep 11 22:59:06 2018 Do not modify this file manually. auth required pam_env.so auth required pam_faildelay.so delay=2000000 auth [default=1 ignore=ignore success=ok] pam_succeed_if.so uid >= 1000 quiet auth [default=1 ignore=ignore success=ok] pam_localuser.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 1000 quiet_success auth sufficient pam_sss.so forward_pass auth required pam_deny.so account required pam_unix.so account sufficient pam_localuser.so",
"authselect select sssd",
"authselect apply-changes",
"authselect create-profile user-profile -b sssd --symlink-meta --symlink-pam New profile was created at /etc/authselect/custom/user-profile",
"authselect select custom/ user-profile",
"hosts: files",
"authselect create-profile user-profile -b sssd --symlink-meta --symlink-pam",
"authselect select custom/ user-profile",
"hosts: files"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_authentication_and_authorization_in_rhel/configuring-user-authentication-using-authselect_configuring-authentication-and-authorization-in-rhel
|
function::register
|
function::register Name function::register - Return the signed value of the named CPU register Synopsis Arguments name Name of the register to return Description Return the value of the named CPU register, as it was saved when the current probe point was hit. If the register is 32 bits, it is sign-extended to 64 bits. For the i386 architecture, the following names are recognized. (name1/name2 indicates that name1 and name2 are alternative names for the same register.) eax/ax, ebp/bp, ebx/bx, ecx/cx, edi/di, edx/dx, eflags/flags, eip/ip, esi/si, esp/sp, orig_eax/orig_ax, xcs/cs, xds/ds, xes/es, xfs/fs, xss/ss. For the x86_64 architecture, the following names are recognized: 64-bit registers: r8, r9, r10, r11, r12, r13, r14, r15, rax/ax, rbp/bp, rbx/bx, rcx/cx, rdi/di, rdx/dx, rip/ip, rsi/si, rsp/sp; 32-bit registers: eax, ebp, ebx, ecx, edx, edi, edx, eip, esi, esp, flags/eflags, orig_eax; segment registers: xcs/cs, xss/ss. For powerpc, the following names are recognized: r0, r1, ... r31, nip, msr, orig_gpr3, ctr, link, xer, ccr, softe, trap, dar, dsisr, result. For s390x, the following names are recognized: r0, r1, ... r15, args, psw.mask, psw.addr, orig_gpr2, ilc, trap. For AArch64, the following names are recognized: x0, x1, ... x30, fp, lr, sp, pc, and orig_x0.
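As a quick usage sketch, a short SystemTap script can print a saved register value when a probe point is hit. The probe point and register name below are arbitrary examples chosen for illustration, not part of this function's definition:

probe kernel.function("vfs_read") {
  printf("sp=0x%x pid=%d\n", register("sp"), pid())
  exit()
}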
|
[
"register:long(name:string)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-register
|
Chapter 4. Installing
|
Chapter 4. Installing 4.1. Preparing your cluster for OpenShift Virtualization Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements. Important Installation method considerations You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration . Red Hat OpenShift Data Foundation If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. IPv6 You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. FIPS mode If you install your cluster in FIPS mode , no additional setup is required for OpenShift Virtualization. 4.1.1. Supported platforms You can use the following platforms with OpenShift Virtualization: On-premise bare metal servers. See Planning a bare metal cluster for OpenShift Virtualization . Amazon Web Services bare metal instances. See Installing a cluster on AWS with customizations . IBM Cloud(R) Bare Metal Servers. See Deploy OpenShift Virtualization on IBM Cloud(R) Bare Metal nodes . Important Installing OpenShift Virtualization on IBM Cloud(R) Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Bare metal instances or servers offered by other cloud providers are not supported. 4.1.1.1. OpenShift Virtualization on AWS bare metal You can run OpenShift Virtualization on an Amazon Web Services (AWS) bare-metal OpenShift Container Platform cluster. Note OpenShift Virtualization is also supported on Red Hat OpenShift Service on AWS (ROSA) Classic clusters, which have the same configuration requirements as AWS bare-metal clusters. Before you set up your cluster, review the following summary of supported features and limitations: Installing You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. You specify bare-metal instance types by editing the install-config.yaml file. For more information, see the OpenShift Container Platform documentation about installing on AWS. Accessing virtual machines (VMs) There is no change to how you access VMs by using the virtctl CLI tool or the OpenShift Container Platform web console. You can expose VMs by using a NodePort or LoadBalancer service. The load balancer approach is preferable because OpenShift Container Platform automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. 
When you remove the service, OpenShift Container Platform removes the load balancer and its associated resources. Networking You cannot use Single Root I/O Virtualization (SR-IOV) or bridge Container Network Interface (CNI) networks, including virtual LAN (VLAN). If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks. Storage You can use any storage solution that is certified by the storage vendor to work with the underlying platform. Important AWS bare-metal and ROSA clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor. Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations as shown in the following table: Table 4.1. EFS and EBS performance and functionality limitations Feature EBS volume EFS volume Shared storage solutions gp2 gp3 io2 VM live migration Not available Not available Available Available Available Fast VM creation by using cloning Available Not available Available VM backup and restore by using snapshots Available Not available Available Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots to enable live migration, fast VM creation, and VM snapshots capabilities. Hosted control planes (HCPs) HCPs for OpenShift Virtualization are not currently supported on AWS infrastructure. Additional resources Connecting a virtual machine to an OVN-Kubernetes secondary network Exposing a virtual machine by using a service 4.1.2. Hardware and operating system requirements Review the following hardware and operating system requirements for OpenShift Virtualization. 4.1.2.1. CPU requirements Supported by Red Hat Enterprise Linux (RHEL) 9. See Red Hat Ecosystem Catalog for supported CPUs. Note If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines. See Configuring a required node affinity rule for details. Support for AMD and Intel 64-bit architectures (x86-64-v2). Support for Intel 64 or AMD64 CPU extensions. Intel VT or AMD-V hardware virtualization extensions enabled. NX (no execute) flag enabled. 4.1.2.2. Operating system requirements Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes. See About RHCOS for details. Note RHEL worker nodes are not supported. 4.1.2.3. Storage requirements Supported by OpenShift Container Platform. See Optimizing storage . You must create a default OpenShift Virtualization or OpenShift Container Platform storage class. The purpose of this is to address the unique storage needs of VM workloads and offer optimized performance, reliability, and user experience. If both OpenShift Virtualization and OpenShift Container Platform default storage classes exist, the OpenShift Virtualization class takes precedence when creating VM disks. Note To mark a storage class as the default for virtualization workloads, set the annotation storageclass.kubevirt.io/is-default-virt-class to "true" . If the storage provisioner supports snapshots, you must associate a VolumeSnapshotClass object with the default storage class. 4.1.2.3.1. 
About volume and access modes for virtual machine disks If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: ReadWriteMany (RWX) access mode is required for live migration. The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes. Important You cannot live migrate virtual machines with the following configurations: Storage volume with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots. 4.1.3. Live migration requirements Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. 4.1.4. Physical resource overhead requirements OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance. Important The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments. Memory overhead Calculate the memory overhead values for OpenShift Virtualization by using the equations below. Cluster memory overhead Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes. Virtual machine memory overhead 1 Required for the processes that run in the virt-launcher pod. 2 Number of virtual CPUs requested by the virtual machine. 3 Number of virtual graphics cards requested by the virtual machine. 4 Additional memory overhead: If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device. If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB. If Trusted Platform Module (TPM) is enabled, add 53 MiB. CPU overhead Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup. 
Cluster CPU overhead OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes. Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads. Virtual machine CPU overhead If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires. Storage overhead Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment. Cluster storage overhead 10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization. Virtual machine storage overhead Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself. Example As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores. 4.1.5. Single-node OpenShift differences You can install OpenShift Virtualization on single-node OpenShift. However, you should be aware that Single-node OpenShift does not support the following features: High availability Pod disruption Live migration Virtual machines or templates that have an eviction strategy configured Additional resources Glossary of common terms for OpenShift Container Platform storage 4.1.6. Object maximums You must consider the following tested object maximums when planning your cluster: OpenShift Container Platform object maximums . OpenShift Virtualization object maximums . 4.1.7. Cluster high-availability options You can configure one of the following high-availability (HA) options for your cluster: Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks . Note In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens with VMs that ran on the failed node depends on a series of conditions. See Run strategies for more detailed information about the potential outcomes and how run strategies affect those outcomes. Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. 
For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node> . Note Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability. 4.2. Installing OpenShift Virtualization Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. Important If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager for disconnected environments . If you have limited internet connectivity, you can configure proxy support in OLM to access the OperatorHub. 4.2.1. Installing the OpenShift Virtualization Operator Install the OpenShift Virtualization Operator by using the OpenShift Container Platform web console or the command line. 4.2.1.1. Installing the OpenShift Virtualization Operator by using the web console You can deploy the OpenShift Virtualization Operator by using the OpenShift Container Platform web console. Prerequisites Install OpenShift Container Platform 4.17 on your cluster. Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Operators OperatorHub . In the Filter by keyword field, type Virtualization . Select the OpenShift Virtualization Operator tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page: Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. For Installed Namespace , ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist. Warning Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail. For Approval Strategy , it is highly recommended that you select Automatic , which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel. While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic . Warning Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported. Click Install to make the Operator available to the openshift-cnv namespace. When the Operator installs successfully, click Create HyperConverged . Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components. Click Create to launch OpenShift Virtualization. Verification Navigate to the Workloads Pods page and monitor the OpenShift Virtualization pods until they are all Running . After all the pods display the Running state, you can use OpenShift Virtualization. 4.2.1.2. 
Installing the OpenShift Virtualization Operator by using the command line Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster. 4.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators. To subscribe, configure Namespace , OperatorGroup , and Subscription objects by applying a single manifest to your cluster. Prerequisites Install OpenShift Container Platform 4.17 on your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file that contains the following manifest: apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: "stable" 1 1 Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. Create the required Namespace , OperatorGroup , and Subscription objects for OpenShift Virtualization by running the following command: USD oc apply -f <file name>.yaml Note You can configure certificate rotation parameters in the YAML file. 4.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI You can deploy the OpenShift Virtualization Operator by using the oc CLI. Prerequisites Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace. Log in as a user with cluster-admin privileges. Procedure Create a YAML file that contains the following manifest: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: Deploy the OpenShift Virtualization Operator by running the following command: USD oc apply -f <file_name>.yaml Verification Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command: USD watch oc get csv -n openshift-cnv The following output displays if deployment was successful: Example output NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.17.5 OpenShift Virtualization 4.17.5 Succeeded 4.2.2. steps The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. 4.3. Uninstalling OpenShift Virtualization You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources. 4.3.1. Uninstalling OpenShift Virtualization by using the web console You uninstall OpenShift Virtualization by using the web console to perform the following tasks: Delete the HyperConverged CR . Delete the OpenShift Virtualization Operator . Delete the openshift-cnv namespace . 
Delete the OpenShift Virtualization custom resource definitions (CRDs) . Important You must first delete all virtual machines , and virtual machine instances . You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. 4.3.1.1. Deleting the HyperConverged custom resource To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Select the OpenShift Virtualization Operator. Click the OpenShift Virtualization Deployment tab. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged . Click Delete in the confirmation window. 4.3.1.2. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 4.3.1.3. Deleting a namespace using the web console You can delete a namespace by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to Administration Namespaces . Locate the namespace that you want to delete in the list of namespaces. On the far right side of the namespace listing, select Delete Namespace from the Options menu . When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field. Click Delete . 4.3.1.4. Deleting OpenShift Virtualization custom resource definitions You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to Administration CustomResourceDefinitions . Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs. Click the Options menu beside each CRD and select Delete CustomResourceDefinition . 4.3.2. Uninstalling OpenShift Virtualization by using the CLI You can uninstall OpenShift Virtualization by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have deleted all virtual machines and virtual machine instances. 
You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. Procedure Delete the HyperConverged custom resource: USD oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization Operator subscription: USD oc delete subscription kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization ClusterServiceVersion resource: USD oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Delete the OpenShift Virtualization namespace: USD oc delete namespace openshift-cnv List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option: USD oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Example output Delete the CRDs by running the oc delete crd command without the dry-run option: USD oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Additional resources Deleting virtual machines Deleting virtual machine instances
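Taken together, the CLI uninstall steps can also be run as a single script. The following is a rough sketch rather than an official procedure: it simply chains the commands above and assumes that you are logged in with cluster-admin privileges and that all virtual machines and virtual machine instances have already been deleted.

#!/bin/sh
# Sketch: chain the documented uninstall commands and stop on the first failure.
set -e
oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv
oc delete subscription kubevirt-hyperconverged -n openshift-cnv
oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv
# Namespace deletion can take a while; --wait blocks until it is fully removed.
oc delete namespace openshift-cnv --wait=true
oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv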
|
[
"Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)",
"Memory overhead per infrastructure node ~ 150 MiB",
"Memory overhead per worker node ~ 360 MiB",
"Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4",
"CPU overhead for infrastructure nodes ~ 4 cores",
"CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine",
"Aggregated storage overhead per node ~ 10 GiB",
"apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: \"stable\" 1",
"oc apply -f <file name>.yaml",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:",
"oc apply -f <file_name>.yaml",
"watch oc get csv -n openshift-cnv",
"NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.17.5 OpenShift Virtualization 4.17.5 Succeeded",
"oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv",
"oc delete subscription kubevirt-hyperconverged -n openshift-cnv",
"oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv",
"oc delete namespace openshift-cnv",
"oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv",
"customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)",
"oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/virtualization/installing
|
Chapter 6. Configuring cascading replication using the command line
|
Chapter 6. Configuring cascading replication using the command line In a cascading replication scenario, one server, a hub, acts both as a consumer and a supplier. The hub is a read-only replica that maintains a changelog. It receives updates from the supplier and supplies these updates to a consumer. Use cascading replication for balancing heavy traffic loads or to keep suppliers based locally in geographically-distributed environments. 6.1. Preparing the new hub server using the command line To prepare the hub.example.com host, enable replication. This process: Configures the role of this server in the replication topology Defines the suffix that is replicated Creates the replication manager account the supplier uses to connect to this host Perform this procedure on the hub that you want to add to the replication topology. Prerequisites You installed the Directory Server instance. The database for the dc=example,dc=com suffix exists. Procedure Enable replication for the dc=example,dc=com suffix: # dsconf -D "cn=Directory Manager" ldap://hub.example.com replication enable --suffix "dc=example,dc=com" --role "hub" --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" This command configures the hub.example.com host as a hub for the dc=example,dc=com suffix. Additionally, the command creates the cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. Verification Display the replication configuration: # dsconf -D "cn=Directory Manager" ldap://hub.example.com replication get --suffix "dc=example,dc=com" dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config ... nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 2 nsDS5ReplicaId: 65535 ... These parameters indicate: nsDS5ReplicaBindDN specifies the replication manager account. nsDS5ReplicaRoot sets the suffix that is replicated. nsDS5ReplicaType set to 2 defines that this host is a consumer, which is also valid for a hub. nsDS5ReplicaId set to 65535 defines that this host is a hub. The dsconf utility automatically sets this value if you define the --role "hub" option. Additional resources Installing Red Hat Directory Server Storing suffixes in separate databases cn=replica,cn=suffix_DN,cn=mapping tree,cn=config 6.2. Configuring the existing server as a supplier to the hub server using the command line To prepare the existing server as a supplier, you need to: Enable replication for the suffix. Create a replication agreement to the hub. Initialize the hub. Perform this procedure on the existing supplier in the replication topology. Prerequisites You enabled replication for the dc=example,dc=com suffix on the hub to join. Procedure Enable replication for the dc=example,dc=com suffix: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication enable --suffix "dc=example,dc=com" --role "supplier" --replica-id 1 This command configures the supplier.example.com host as a supplier for the dc=example,dc=com suffix, and sets the replica ID of this entry to 1 . Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology.
Add the replication agreement and initialize the new server: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt create --suffix "dc=example,dc=com" --host "hub.example.com" --port 389 --conn-protocol LDAP --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" --bind-method SIMPLE --init example-agreement-supplier-to-hub This command creates a replication agreement named example-agreement-supplier-to-hub . The replication agreement defines settings, such as the hub's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to the hub. After the agreement was created, Directory Server initializes hub.example.com . Depending on the amount of data to replicate, initialization can be time-consuming. Verification Display the replication configuration: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication get --suffix "dc=example,dc=com" dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config ... nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 3 ... These parameters indicate: nsDS5ReplicaRoot sets the suffix that is replicated. nsDS5ReplicaType set to 3 defines that this host is a supplier. Verify whether the initialization was successful: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt init-status --suffix "dc=example,dc=com" example-agreement-supplier-to-hub Agreement successfully initialized. Display the replication status: # dsconf -D "cn=Directory Manager" ldap://supplier.example.com repl-agmt status --suffix "dc=example,dc=com" example-agreement-supplier-to-hub Status For Agreement: "example-agreement-supplier-to-hub" (hub.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331105030Z Last Update End: 20210331105030Z Number Of Changes Sent: 0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331105026Z Last Init End: 20210331105029Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (Unknown) consumer (Unknown) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Replication Lag Time: Unavailable Verify the Replication Status and Last Update Status fields. Troubleshooting By default, the replication idle timeout for all agreements on a server is 1 hour. If the initialization of large databases fails due to timeouts, set the nsslapd-idletimeout parameter to a higher value. For example, to set the parameter to 7200 (2 hours), enter: # dsconf -D "cn=Directory Manager" ldap://supplier1.example.com config replace nsslapd-idletimeout=7200 To set an unlimited period, set nsslapd-idletimeout to 0 . Additional resources cn=replica,cn=suffix_DN,cn=mapping tree,cn=config 6.3. Preparing the new consumer of the hub using the command line To prepare the consumer.example.com host, enable replication. This process: Configures the role of this server in the replication topology Defines the suffix that is replicated Creates the replication manager account the hub uses to connect to this host Perform this procedure on the consumer that you want to add to the replication topology. Prerequisites You installed the Directory Server instance. For details, see Setting up a new instance on the command line using a .inf file . The database for the dc=example,dc=com suffix exists. 
Procedure Enable replication for the dc=example,dc=com suffix: # dsconf -D "cn=Directory Manager" ldap://consumer.example.com replication enable --suffix "dc=example,dc=com" --role "consumer" --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" This command configures the consumer.example.com host as a consumer for the dc=example,dc=com suffix. Additionally, the command creates the cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. Verification Display the replication configuration: # dsconf -D "cn=Directory Manager" ldap://consumer.example.com replication get --suffix "dc=example,dc=com" dn: cn=replica,cn=dc\3Dexample\2Cdc\3Dcom,cn=mapping tree,cn=config ... nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 2 ... These parameters indicate: nsDS5ReplicaBindDN specifies the replication manager account. nsDS5ReplicaRoot sets the suffix that is replicated. nsDS5ReplicaType set to 2 defines that this host is a consumer. Additional resources Installing Red Hat Directory Server Storing suffixes in separate databases cn=replica,cn=suffix_DN,cn=mapping tree,cn=config 6.4. Configuring the hub server as a supplier for the consumer using the command line To prepare the hub, you need to: Create a replication agreement to the consumer. Initialize the consumer. Perform this procedure on the hub in the replication topology. Prerequisites The hub is initialized, and replication from the supplier to the hub works. You enabled replication for the dc=example,dc=com suffix on the hub. Procedure Add the replication agreement and initialize the consumer: # dsconf -D "cn=Directory Manager" ldap://hub.example.com repl-agmt create --suffix "dc=example,dc=com" --host "consumer.example.com" --port 389 --conn-protocol LDAP --bind-dn "cn=replication manager,cn=config" --bind-passwd "password" --bind-method SIMPLE --init example-agreement-hub-to-consumer This command creates a replication agreement named example-agreement-hub-to-consumer . The replication agreement defines settings, such as the consumer's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to this consumer. After the agreement was created, Directory Server initializes consumer.example.com . Depending on the amount of data to replicate, initialization can be time-consuming. Verification Verify whether the initialization was successful: # dsconf -D "cn=Directory Manager" ldap://hub.example.com repl-agmt init-status --suffix "dc=example,dc=com" example-agreement-hub-to-consumer Agreement successfully initialized. 
Display the replication status: # dsconf -D "cn=Directory Manager" ldap://hub.example.com repl-agmt status --suffix "dc=example,dc=com" example-agreement-hub-to-consumer Status For Agreement: "example-agreement-hub-to-consumer" (consumer.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331131534Z Last Update End: 20210331131534Z Number Of Changes Sent: 0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331131530Z Last Init End: 20210331131533Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (Unknown) consumer (Unknown) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Replication Lag Time: Unavailable Verify the Replication Status and Last Update Status fields. Troubleshooting By default, the replication idle timeout for all agreements on a server is 1 hour. If the initialization of large databases fails due to timeouts, set the nsslapd-idletimeout parameter to a higher value. For example, to set the parameter to 7200 (2 hours), enter: # dsconf -D "cn=Directory Manager" ldap://hub.example.com config replace nsslapd-idletimeout=7200 To set an unlimited period, set nsslapd-idletimeout to 0 .
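As a final sanity check of the whole supplier-to-hub-to-consumer path, you can add a test entry on the supplier and confirm that it arrives on the consumer. This sketch is not part of the official procedure; it assumes the host names and the dc=example,dc=com suffix used throughout this chapter and that the OpenLDAP client tools are installed.

# Add a test entry on the supplier:
ldapadd -x -H ldap://supplier.example.com -D "cn=Directory Manager" -W <<EOF
dn: uid=repltest,dc=example,dc=com
objectClass: inetOrgPerson
uid: repltest
cn: Replication test
sn: Test
EOF
# After replication has had a moment to propagate, the entry should be visible on the consumer:
ldapsearch -x -H ldap://consumer.example.com -D "cn=Directory Manager" -W -b "dc=example,dc=com" "(uid=repltest)" dn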
|
[
"dsconf -D \"cn=Directory Manager\" ldap://hub.example.com replication enable --suffix \"dc=example,dc=com\" --role \"hub\" --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\"",
"dsconf -D \"cn=Directory Manager\" ldap://hub.example.com replication get --suffix \"dc=example,dc=com\" dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 2 nsDS5ReplicaId: 65535",
"[command]`dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication enable --suffix \"dc=example,dc=com\" --role \"supplier\" --replica-id 1",
"dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt create --suffix \"dc=example,dc=com\" --host \"hub.example.com\" --port 389 --conn-protocol LDAP --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\" --bind-method SIMPLE --init example-agreement-supplier-to-hub",
"dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication get --suffix \"dc=example,dc=com\" dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 3",
"dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt init-status --suffix \"dc=example,dc=com\" example-agreement-supplier-to-hub Agreement successfully initialized.",
"dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com repl-agmt status --suffix \"dc=example,dc=com\" example-agreement-supplier-to-hub Status For Agreement: \"example-agreement-supplier-to-hub\" (hub.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331105030Z Last Update End: 20210331105030Z Number Of Changes Sent: 0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331105026Z Last Init End: 20210331105029Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (Unknown) consumer (Unknown) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Replication Lag Time: Unavailable",
"dsconf -D \"cn=Directory Manager\" ldap://supplier1.example.com config replace nsslapd-idletimeout=7200",
"dsconf -D \"cn=Directory Manager\" ldap://consumer.example.com replication enable --suffix \"dc=example,dc=com\" --role \"consumer\" --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\"",
"dsconf -D \"cn=Directory Manager\" ldap://consumer.example.com replication get --suffix \"dc=example,dc=com\" dn: cn=replica,cn=dc\\3Dexample\\2Cdc\\3Dcom,cn=mapping tree,cn=config nsDS5ReplicaBindDN: cn=replication manager,cn=config nsDS5ReplicaRoot: dc=example,dc=com nsDS5ReplicaType: 2",
"dsconf -D \"cn=Directory Manager\" ldap://hub.example.com repl-agmt create --suffix \"dc=example,dc=com\" --host \"consumer.example.com\" --port 389 --conn-protocol LDAP --bind-dn \"cn=replication manager,cn=config\" --bind-passwd \"password\" --bind-method SIMPLE --init example-agreement-hub-to-consumer",
"dsconf -D \"cn=Directory Manager\" ldap://hub.example.com repl-agmt init-status --suffix \"dc=example,dc=com\" example-agreement-hub-to-consumer Agreement successfully initialized.",
"dsconf -D \"cn=Directory Manager\" ldap://hub.example.com repl-agmt status --suffix \"dc=example,dc=com\" example-agreement-hub-to-consumer Status For Agreement: \"example-agreement-hub-to-consumer\" (consumer.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20210331131534Z Last Update End: 20210331131534Z Number Of Changes Sent: 0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20210331131530Z Last Init End: 20210331131533Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: Not in Synchronization: supplier (Unknown) consumer (Unknown) State (green) Reason (error (0) replica acquired successfully: incremental update succeeded) Replication Lag Time: Unavailable",
"dsconf -D \"cn=Directory Manager\" ldap://hub .example.com config replace nsslapd-idletimeout=7200"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_configuring-cascading-replication-using-the-command-line_configuring-and-managing-replication
|
Chapter 2. The pcsd Web UI
|
Chapter 2. The pcsd Web UI This chapter provides an overview of configuring a Red Hat High Availability cluster with the pcsd Web UI. 2.1. pcsd Web UI Setup To set up your system to use the pcsd Web UI to configure a cluster, use the following procedure. Install the Pacemaker configuration tools, as described in Section 1.2, "Installing Pacemaker configuration tools" . On each node that will be part of the cluster, use the passwd command to set the password for user hacluster , using the same password on each node. Start and enable the pcsd daemon on each node: On one node of the cluster, authenticate the nodes that will constitute the cluster with the following command. After executing this command, you will be prompted for a Username and a Password . Specify hacluster as the Username . On any system, open a browser to the following URL, specifying one of the nodes you have authorized (note that this uses the https protocol). This brings up the pcsd Web UI login screen. Log in as user hacluster . This brings up the Manage Clusters page as shown in Figure 2.1, "Manage Clusters page" . Figure 2.1. Manage Clusters page
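Pulled together, and with node1 and node2 standing in for your own node names, the setup amounts to the following commands run as root; this is a condensed restatement of the steps above rather than an additional procedure.

# On every node that will be part of the cluster:
passwd hacluster
systemctl start pcsd.service
systemctl enable pcsd.service

# On one node only, authenticate the cluster nodes (enter hacluster and its password when prompted):
pcs cluster auth node1 node2

# Then open https://node1:2224 in a browser and log in as user hacluster.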
|
[
"systemctl start pcsd.service systemctl enable pcsd.service",
"pcs cluster auth node1 node2 ... nodeN",
"https:// nodename :2224"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-pcsd-haar
|
Chapter 2. Configuring the Compute service (nova)
|
Chapter 2. Configuring the Compute service (nova) As a cloud administrator, you use environment files to customize the Compute (nova) service. Puppet generates and stores this configuration in the /var/lib/config-data/puppet-generated/<nova_container>/etc/nova/nova.conf file. Use the following configuration methods to customize the Compute service configuration, in the following order of precedence: Heat parameters - as detailed in the Compute (nova) Parameters section in the Overcloud Parameters guide. The following example uses heat parameters to set the default scheduler filters, and configure an NFS backend for the Compute service: Puppet parameters - as defined in /etc/puppet/modules/nova/manifests/* : Note Only use this method if an equivalent heat parameter does not exist. Manual hieradata overrides - for customizing parameters when no heat or Puppet parameter exists. For example, the following sets the timeout_nbd in the [DEFAULT] section on the Compute role: Warning If a heat parameter exists, use it instead of the Puppet parameter. If a Puppet parameter exists, but not a heat parameter, use the Puppet parameter instead of the manual override method. Use the manual override method only if there is no equivalent heat or Puppet parameter. Tip Follow the guidance in Identifying parameters that you want to modify to determine if a heat or Puppet parameter is available for customizing a particular configuration. For more information about how to configure overcloud services, see Heat parameters in the Director Installation and Usage guide. 2.1. Configuring memory for overallocation When you use memory overcommit ( NovaRAMAllocationRatio >= 1.0), you need to deploy your overcloud with enough swap space to support the allocation ratio. Note If your NovaRAMAllocationRatio parameter is set to < 1 , follow the RHEL recommendations for swap size. For more information, see Recommended system swap space in the RHEL Managing Storage Devices guide. Prerequisites You have calculated the swap size your node requires. For more information, see Calculating swap size . Procedure Copy the /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml file to your environment file directory: Configure the swap size by adding the following parameters to your enable-swap.yaml file: Add the enable_swap.yaml environment file to the stack with your other environment files and deploy the overcloud: 2.2. Calculating reserved host memory on Compute nodes To determine the total amount of RAM to reserve for host processes, you need to allocate enough memory for each of the following: The resources that run on the host, for example, OSD consumes 3 GB of memory. The emulator overhead required to host instances. The hypervisor for each instance. After you calculate the additional demands on memory, use the following formula to help you determine the amount of memory to reserve for host processes on each node: Replace vm_no with the number of instances. Replace avg_instance_size with the average amount of memory each instance can use. Replace overhead with the hypervisor overhead required for each instance. Replace resource1 and all resources up to <resourcen> with the number of a resource type on the node. Replace resource_ram with the amount of RAM each resource of this type requires. 2.3. Calculating swap size The allocated swap size must be large enough to handle any memory overcommit. 
You can use the following formulas to calculate the swap size your node requires: overcommit_ratio = NovaRAMAllocationRatio - 1 Minimum swap size (MB) = (total_RAM * overcommit_ratio) + RHEL_min_swap Recommended (maximum) swap size (MB) = total_RAM * (overcommit_ratio + percentage_of_RAM_to_use_for_swap) The percentage_of_RAM_to_use_for_swap variable creates a buffer to account for QEMU overhead and any other resources consumed by the operating system or host services. For instance, to use 25% of the available RAM for swap, with 64GB total RAM, and NovaRAMAllocationRatio set to 1 : Recommended (maximum) swap size = 64000 MB * (0 + 0.25) = 16000 MB For information about how to calculate the NovaReservedHostMemory value, see Calculating reserved host memory on Compute nodes . For information about how to determine the RHEL_min_swap value, see Recommended system swap space in the RHEL Managing Storage Devices guide.
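To make the arithmetic in sections 2.2 and 2.3 concrete, the following shell sketch plugs example values into the formulas above. Every input value is an illustrative assumption; substitute your own sizing data before acting on the output.

# Assumptions: 64000 MB of RAM, 10 instances averaging 2000 MB each with 500 MB of hypervisor
# overhead per instance, one OSD consuming 3000 MB, NovaRAMAllocationRatio of 1.5, and 25% of
# RAM used as the swap buffer.
awk 'BEGIN {
  total_ram = 64000; vm_no = 10; avg_instance_size = 2000; overhead = 500
  osd_count = 1; osd_ram = 3000
  reserved = total_ram - ((vm_no * (avg_instance_size + overhead)) + (osd_count * osd_ram))
  overcommit_ratio = 1.5 - 1; swap_buffer = 0.25
  printf "NovaReservedHostMemory ~ %d MB\n", reserved
  printf "Recommended (maximum) swap ~ %d MB\n", total_ram * (overcommit_ratio + swap_buffer)
}'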
|
[
"parameter_defaults: NovaNfsEnabled: true NovaNfsOptions: \"context=system_u:object_r:nfs_t:s0\" NovaNfsShare: \"192.0.2.254:/export/nova\" NovaNfsVersion: \"4.2\" NovaSchedulerEnabledFilters: - AggregateInstanceExtraSpecsFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter",
"parameter_defaults: ComputeExtraConfig: nova::compute::force_raw_images: True",
"parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/timeout_nbd: value: '20'",
"cp /usr/share/openstack-tripleo-heat-templates/environments/enable-swap.yaml /home/stack/templates/enable-swap.yaml",
"parameter_defaults: swap_size_megabytes: <swap size in MB> swap_path: <full path to location of swap, default: /swap>",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/enable-swap.yaml",
"NovaReservedHostMemory = total_RAM - ( (vm_no * (avg_instance_size + overhead)) + (resource1 * resource_ram) + (resourcen * resource_ram))"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-the-compute-service_osp
|
Chapter 4. Creating an OpenShift route to access a Kafka cluster
|
Chapter 4. Creating an OpenShift route to access a Kafka cluster Create an OpenShift route to access a Kafka cluster outside of OpenShift. This procedure describes how to expose a Kafka cluster to clients outside the OpenShift environment. After the Kafka cluster is exposed, external clients can produce and consume messages from the Kafka cluster. To create an OpenShift route, a route listener is added to the configuration of a Kafka cluster installed on OpenShift. Warning An OpenShift Route address includes the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-streams-kafka (<cluster_name>-kafka-<listener_name>-bootstrap-<namespace>). Be careful that the whole length of the address does not exceed a maximum limit of 63 characters. Prerequisites You have created a Kafka cluster on OpenShift . You need the OpenJDK keytool to manage certificates. (Optional) You can perform some of the steps using the OpenShift oc CLI tool. Procedure Navigate in the web console to the Operators > Installed Operators page and select Streams for Apache Kafka to display the operator details. Select the Kafka page to show the installed Kafka clusters. Click the name of the Kafka cluster you are configuring to view its details. We use a Kafka cluster named my-cluster in this example. Select the YAML page for the Kafka cluster my-cluster . Add route listener configuration to create an OpenShift route named listener1 . The listener configuration must be set to the route type. You add the listener configuration under listeners in the Kafka configuration. External route listener configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: streams-kafka spec: kafka: # ... listeners: # ... - name: listener1 port: 9094 type: route tls: true # ... The client connects on port 443, the default router port, but traffic is then routed to the port you configure, which is 9094 in this example. Save the updated configuration. Select the Resources page for the Kafka cluster my-cluster to locate the connection information you will need for your client. From the Resources page, you'll find details for the route listener and the public cluster certificate you need to connect to the Kafka cluster. Click the name of the my-cluster-kafka-listener1-bootstrap route created for the Kafka cluster to show the route details. Make a note of the hostname. The hostname is specified with port 443 in a Kafka client as the bootstrap address for connecting to the Kafka cluster. You can also locate the bootstrap address by navigating to Networking > Routes and selecting the streams-kafka project to display the routes created in the namespace. Or you can use the oc tool to extract the bootstrap details. Extracting bootstrap information oc get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}' Navigate back to the Resources page and click the name of the my-cluster-cluster-ca-cert to show the secret details for accessing the Kafka cluster. The ca.crt certificate file contains the public certificate of the Kafka cluster. You will need the certificate to access the Kafka broker. Make a local copy of the ca.crt public certificate file. You can copy the details of the certificate or use the OpenShift oc tool to extract them. 
Extracting the public certificate oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt Create a local truststore for the public cluster certificate using keytool . Creating a local truststore keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt When prompted, create a password for accessing the truststore. The truststore is specified in a Kafka client for authenticating access to the Kafka cluster. You are now ready to start sending and receiving messages.
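To confirm that the route works end to end, you can point any Kafka client at the route host name on port 443 together with the truststore you just created. The following sketch uses the console producer that ships with Apache Kafka; it assumes you have a Kafka distribution available on your workstation and that a topic named my-topic already exists.

bin/kafka-console-producer.sh \
  --bootstrap-server <route_hostname>:443 \
  --topic my-topic \
  --producer-property security.protocol=SSL \
  --producer-property ssl.truststore.location=./client.truststore.jks \
  --producer-property ssl.truststore.password=<truststore_password>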
|
[
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: streams-kafka spec: kafka: # listeners: # - name: listener1 port: 9094 type: route tls: true",
"get routes my-cluster-kafka-listener1-bootstrap -o=jsonpath='{.status.ingress[0].host}{\"\\n\"}'",
"extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt",
"keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/getting_started_with_streams_for_apache_kafka_on_openshift/proc-creating-route-str
|
Chapter 6. Installing an IdM server: Without integrated DNS, with an integrated CA as the root CA
|
Chapter 6. Installing an IdM server: Without integrated DNS, with an integrated CA as the root CA This chapter describes how you can install a new Identity Management (IdM) server without integrated DNS. Note Red Hat strongly recommends installing IdM-integrated DNS for basic usage within the IdM deployment: When the IdM server also manages DNS, there is tight integration between DNS and native IdM tools which enables automating some of the DNS record management. For more details, see Planning your DNS services and host names . 6.1. Interactive installation During the interactive installation using the ipa-server-install utility, you are asked to supply basic configuration of the system, for example the realm, the administrator's password and the Directory Manager's password. The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. This procedure installs a server: Without integrated DNS With integrated Identity Management (IdM) certificate authority (CA) as the root CA, which is the default CA configuration Procedure Run the ipa-server-install utility. The script prompts to configure an integrated DNS service. Press Enter to select the default no option. The script prompts for several required settings and offers recommended default values in brackets. To accept a default value, press Enter . To provide a custom value, enter the required value. Warning Plan these names carefully. You will not be able to change them after the installation is complete. Enter the passwords for the Directory Server superuser ( cn=Directory Manager ) and for the IdM administration system user account ( admin ). The script prompts for several required settings and offers recommended default values in brackets. To accept a default value, press Enter . To provide a custom value, enter the required value. Enter yes to confirm the server configuration. The installation script now configures the server. Wait for the operation to complete. The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. Additional resources For more information about the DNS resource records you must add to your DNS system, see IdM DNS records for external DNS systems . 6.2. Non-interactive installation You can install a server without integrated DNS or with integrated Identity Management (IdM) certificate authority (CA) as the root CA, which is the default CA configuration. Note The ipa-server-install installation script creates a log file at /var/log/ipaserver-install.log . If the installation fails, the log can help you identify the problem. Procedure Run the ipa-server-install utility with the options to supply all the required information. 
The minimum required options for non-interactive installation are: --realm to provide the Kerberos realm name --ds-password to provide the password for the Directory Manager (DM), the Directory Server super user --admin-password to provide the password for admin , the IdM administrator --unattended to let the installation process select default options for the host name and domain name For example: The installation script produces a file with DNS resource records: the /tmp/ipa.system.records.UFRPto.db file in the example output below. Add these records to the existing external DNS servers. The process of updating the DNS records varies depending on the particular DNS solution. Important The server installation is not complete until you add the DNS records to the existing DNS servers. Additional resources For more information about the DNS resource records you must add to your DNS system, see IdM DNS records for external DNS systems . For a complete list of options accepted by ipa-server-install , run the ipa-server-install --help command. 6.3. IdM DNS records for external DNS systems After installing an IdM server without integrated DNS, you must add LDAP and Kerberos DNS resource records for the IdM server to your external DNS system. The ipa-server-install installation script generates a file containing the list of DNS resource records with a file name in the format /tmp/ipa.system.records. <random_characters> .db and prints instructions to add those records: This is an example of the contents of the file: Note After adding the LDAP and Kerberos DNS resource records for the IdM server to your DNS system, ensure that the DNS management tools have not added PTR records for ipa-ca . The presence of PTR records for ipa-ca in your DNS could cause subsequent IdM replica installations to fail.
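How you load the generated records depends entirely on your DNS product. As one illustration only, if the external DNS for the zone is a plain BIND zone file, you could append the generated records and reload the zone; the record file name is taken from the example above, while the zone file path is an assumption about your setup.

# Append the generated records to the zone file:
cat /tmp/ipa.system.records.6zdjqxh3.db >> /var/named/example.com.db
# Increment the SOA serial number in the zone file, then reload the zone:
rndc reload example.com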
|
[
"ipa-server-install",
"Do you want to configure integrated DNS (BIND)? [no]:",
"Server host name [server.idm.example.com]: Please confirm the domain name [idm.example.com]: Please provide a realm name [IDM.EXAMPLE.COM]:",
"Directory Manager password: IPA admin password:",
"NetBIOS domain name [EXAMPLE]: Do you want to configure chrony with NTP server or pool address? [no]:",
"Continue to configure the system with these values? [no]: yes",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"ipa-server-install --realm IDM.EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended",
"Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server",
"Please add records in this file to your DNS system: /tmp/ipa.system.records.6zdjqxh3.db",
"_kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._tcp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos._udp.example.com. 86400 IN SRV 0 100 88 server.example.com. _kerberos.example.com. 86400 IN TXT \"EXAMPLE.COM\" _kpasswd._tcp.example.com. 86400 IN SRV 0 100 464 server.example.com. _kpasswd._udp.example.com. 86400 IN SRV 0 100 464 server.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server.example.com."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/installing-an-ipa-server-without-integrated-dns_installing-identity-management
|
4.6. Red Hat Virtualization Windows Guest VSS Support
|
4.6. Red Hat Virtualization Windows Guest VSS Support The Red Hat Virtualization Backup and Restore API provides integration with Microsoft Windows Volume Shadow Copy Service (VSS) using qemu-ga . The VSS provider registration is made at the guest level as part of the Guest Tools deployment. qemu-ga provides VSS support, and live snapshots attempt to quiesce the guest whenever possible.
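Quiescing comes into play when a snapshot is requested while the guest agent is running. The following is a minimal sketch, with a placeholder Manager host, credentials, and VM ID, showing a version 3 REST API call that creates a live snapshot; when qemu-ga and the VSS provider are installed in the Windows guest, the snapshot is quiesced where possible.

# Create a live snapshot of a VM (placeholder host, credentials, and VM ID).
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -X POST \
  -d '<snapshot><description>Quiesced backup snapshot</description></snapshot>' \
  'https://rhvm.example.com/api/vms/00000000-0000-0000-0000-000000000000/snapshots'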
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/red_hat_enterprise_virtualization_windows_guest_vss_support
|
Chapter 4. Red Hat Quay organizations overview
|
Chapter 4. Red Hat Quay organizations overview In Red Hat Quay, an organization is a grouping of users, repositories, and teams. It provides a means to organize and manage access control and permissions within the registry. With organizations, administrators can assign roles and permissions to users and teams. Other useful information about organizations includes the following: You cannot have an organization embedded within another organization. To subdivide an organization, you use teams. Organizations cannot contain users directly. You must first add a team, and then add one or more users to each team. Note Individual users can be added to specific repositories inside of an organization. Consequently, those users are not members of any team on the Repository Settings page. The Collaborators View on the Teams and Memberships page shows users who have direct access to specific repositories within the organization without needing to be part of that organization specifically. Teams can be set up in organizations as just members who use the repositories and associated images, or as administrators with special privileges for managing the Organization. Users can create their own organization to share repositories of container images. This can be done through the Red Hat Quay UI, or by the Red Hat Quay API if you have an OAuth token. 4.1. Creating an organization by using the UI Use the following procedure to create a new organization by using the UI. Procedure Log in to your Red Hat Quay registry. Click Organization in the navigation pane. Click Create Organization . Enter an Organization Name , for example, testorg . Enter an Organization Email . Click Create . Now, your example organization should appear on the Organizations page. 4.2. Creating an organization by using the Red Hat Quay API Use the following procedure to create a new organization using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a new organization using the POST /api/v1/organization/ endpoint: USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "name": "<new_organization_name>" }' "https://<quay-server.example.com>/api/v1/organization/" Example output "Created" After creation, organization details can be changed, such as adding an email address, with the PUT /api/v1/organization/{orgname} command.
For example: USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "email": "<org_email>", "invoice_email": <true/false>, "invoice_email_address": "<billing_email>" }' Example output {"name": "test", "email": "[email protected]", "avatar": {"name": "test", "hash": "a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe", "color": "#aec7e8", "kind": "user"}, "is_admin": true, "is_member": true, "teams": {"owners": {"name": "owners", "description": "", "role": "admin", "avatar": {"name": "owners", "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90", "color": "#c7c7c7", "kind": "team"}, "can_view": true, "repo_count": 0, "member_count": 1, "is_synced": false}}, "ordered_teams": ["owners"], "invoice_email": true, "invoice_email_address": "[email protected]", "tag_expiration_s": 1209600, "is_free_account": true, "quotas": [{"id": 2, "limit_bytes": 10737418240, "limits": [{"id": 1, "type": "Reject", "limit_percent": 90}]}], "quota_report": {"quota_bytes": 0, "configured_quota": 10737418240, "running_backfill": "complete", "backfill_status": "complete"}} 4.3. Organization settings With Red Hat Quay, some basic organization settings can be adjusted by using the UI. This includes adjusting general settings, such as the email address associated with the organization, and time machine settings, which allow administrators to adjust when a tag is garbage collected after it is permanently deleted. Use the following procedure to alter your organization settings by using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization whose settings you want to adjust, for example, test-org . Click the Settings tab. Optional. Enter the email address associated with the organization. Optional. Set the allotted time for the Time Machine feature to one of the following: A few seconds A day 7 days 14 days A month Click Save . 4.4. Deleting an organization by using the UI Use the following procedure to delete an organization using the v2 UI. Procedure On the Organizations page, select the name of the organization you want to delete, for example, testorg . Click the More Actions drop down menu. Click Delete . Note On the Delete page, there is a Search input box. With this box, users can search for specific organizations to ensure that they are properly scheduled for deletion. For example, if a user is deleting 10 organizations and they want to ensure that a specific organization was deleted, they can use the Search input box to confirm said organization is marked for deletion. Confirm that you want to permanently delete the organization by typing confirm in the box. Click Delete . After deletion, you are returned to the Organizations page. Note You can delete more than one organization at a time by selecting multiple organizations, and then clicking More Actions Delete . 4.5. Deleting an organization by using the Red Hat Quay API Use the following procedure to delete an organization using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file.
Procedure Enter the following command to delete an organization using the DELETE /api/v1/organization/{orgname} endpoint: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay-server.example.com>/api/v1/organization/<organization_name>" The CLI does not return information when deleting an organization from the CLI. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the GET /api/v1/organization/{orgname} command to see if details are returned for the deleted organization: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>" Example output {"detail": "Not Found", "error_message": "Not Found", "error_type": "not_found", "title": "not_found", "type": "http://<quay-server.example.com>/api/v1/error/not_found", "status": 404}
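The create and verify calls can be combined into a small script for repeatable setup. The following is a minimal sketch that reuses only the endpoints shown above; the token, hostname, and organization name are placeholders.

#!/bin/bash
# Placeholders: set these to match your environment.
QUAY_HOST="quay-server.example.com"
TOKEN="<bearer_token>"
ORG="testorg"

# Create the organization.
curl -s -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"name\": \"${ORG}\"}" \
  "https://${QUAY_HOST}/api/v1/organization/"

# Confirm that it exists; a "not_found" response means creation failed
# or the organization has since been deleted.
curl -s -X GET \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://${QUAY_HOST}/api/v1/organization/${ORG}"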
|
[
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"",
"\"Created\"",
"curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"email\": \"<org_email>\", \"invoice_email\": <true/false>, \"invoice_email_address\": \"<billing_email>\" }'",
"{\"name\": \"test\", \"email\": \"[email protected]\", \"avatar\": {\"name\": \"test\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"user\"}, \"is_admin\": true, \"is_member\": true, \"teams\": {\"owners\": {\"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false}}, \"ordered_teams\": [\"owners\"], \"invoice_email\": true, \"invoice_email_address\": \"[email protected]\", \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"quotas\": [{\"id\": 2, \"limit_bytes\": 10737418240, \"limits\": [{\"id\": 1, \"type\": \"Reject\", \"limit_percent\": 90}]}], \"quota_report\": {\"quota_bytes\": 0, \"configured_quota\": 10737418240, \"running_backfill\": \"complete\", \"backfill_status\": \"complete\"}}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://<quay-server.example.com>/api/v1/error/not_found\", \"status\": 404}"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/use_red_hat_quay/organizations-overview
|
Chapter 8. Checking for Local Storage Operator deployments
|
Chapter 8. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power
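For a scripted variant of the check in the procedure above, you can filter the PVC listing directly. This sketch relies on the column layout of the oc get pvc output shown in the example; a non-zero count indicates a deployment backed by the Local Storage Operator.

# Count ocs-deviceset PVCs that are bound to the localblock storage class.
oc get pvc -n openshift-storage --no-headers | awk '$1 ~ /^ocs-deviceset/ && $6 == "localblock"' | wc -l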
|
[
"oc get pvc -n openshift-storage",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/troubleshooting_openshift_data_foundation/checking-for-local-storage-operator-deployments_rhodf
|
Chapter 45. Infinispan
|
Chapter 45. Infinispan Both producer and consumer are supported This component allows you to interact with Infinispan distributed data grid / cache using the Hot Rod procol. Infinispan is an extremely scalable, highly available key/value data store and data grid platform written in Java. 45.1. Dependencies When using infinispan with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-starter</artifactId> </dependency> 45.2. URI format The producer allows sending messages to a remote cache using the HotRod protocol. The consumer allows listening for events from a remote cache using the HotRod protocol. 45.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 45.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 45.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 45.4. Component Options The Infinispan component supports 26 options, which are listed below. Name Description Default Type configuration (common) Component configuration. InfinispanRemoteConfiguration hosts (common) Specifies the host of the cache on Infinispan instance. String queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder secure (common) Define if we are connecting to a secured Infinispan instance. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanRemoteCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. String defaultValue (producer) Set a specific default value for some producer operations. 
Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: PUT PUTASYNC PUTALL PUTALLASYNC PUTIFABSENT PUTIFABSENTASYNC GET GETORDEFAULT CONTAINSKEY CONTAINSVALUE REMOVE REMOVEASYNC REPLACE REPLACEASYNC SIZE CLEAR CLEARASYNC QUERY STATS COMPUTE COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object password ( security) Define the password to access the infinispan instance. String saslMechanism ( security) Define the SASL Mechanism to access the infinispan instance. String securityRealm ( security) Define the security realm to access the infinispan instance. String securityServerName ( security) Define the security server name to access the infinispan instance. String username ( security) Define the username to access the infinispan instance. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. RemoteCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationProperties (advanced) Implementation specific properties for the CacheManager. Map configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 45.5. Endpoint Options The Infinispan endpoint is configured using URI syntax: with the following path and query parameters: 45.5.1. Path Parameters (1 parameters) Name Description Default Type cacheName (common) Required The name of the cache to use. Use current to use the existing cache name from the currently configured cached manager. Or use default for the default cache manager name. String 45.5.2. 
Query Parameters (26 parameters) Name Description Default Type hosts (common) Specifies the host of the cache on Infinispan instance. String queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder secure (common) Define if we are connecting to a secured Infinispan instance. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanRemoteCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: PUT PUTASYNC PUTALL PUTALLASYNC PUTIFABSENT PUTIFABSENTASYNC GET GETORDEFAULT CONTAINSKEY CONTAINSVALUE REMOVE REMOVEASYNC REPLACE REPLACEASYNC SIZE CLEAR CLEARASYNC QUERY STATS COMPUTE COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object password ( security) Define the password to access the infinispan instance. String saslMechanism ( security) Define the SASL Mechanism to access the infinispan instance. String securityRealm ( security) Define the security realm to access the infinispan instance. String securityServerName ( security) Define the security server name to access the infinispan instance. String username ( security) Define the username to access the infinispan instance. String cacheContainer (advanced) Autowired Specifies the cache Container to connect. RemoteCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. 
Configuration configurationProperties (advanced) Implementation specific properties for the CacheManager. Map configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 45.6. Camel Operations This section lists all available operations, along with their header information. Table 45.1. Table 1. Put Operations Operation Name Description InfinispanOperation.PUT Puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTASYNC Asynchronously puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTIFABSENT Puts a key/value pair in the cache if it did not exist, optionally with expiration InfinispanOperation.PUTIFABSENTASYNC Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 45.2. Table 2. Put All Operations Operation Name Description InfinispanOperation.PUTALL Adds multiple entries to a cache, optionally with expiration CamelInfinispanOperation.PUTALLASYNC Asynchronously adds multiple entries to a cache, optionally with expiration Required Headers : CamelInfinispanMap Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Table 45.3. Table 3. Get Operations Operation Name Description InfinispanOperation.GET Retrieves the value associated with a specific key from the cache InfinispanOperation.GETORDEFAULT Retrieves the value, or default value, associated with a specific key from the cache Required Headers : CamelInfinispanKey Table 45.4. Table 4. Contains Key Operation Operation Name Description InfinispanOperation.CONTAINSKEY Determines whether a cache contains a specific key Required Headers CamelInfinispanKey Result Header CamelInfinispanOperationResult Table 45.5. Table 5. Contains Value Operation Operation Name Description InfinispanOperation.CONTAINSVALUE Determines whether a cache contains a specific value Required Headers : CamelInfinispanKey Table 45.6. Table 6. Remove Operations Operation Name Description InfinispanOperation.REMOVE Removes an entry from a cache, optionally only if the value matches a given one InfinispanOperation.REMOVEASYNC Asynchronously removes an entry from a cache, optionally only if the value matches a given one Required Headers : CamelInfinispanKey Optional Headers : CamelInfinispanValue Result Header : CamelInfinispanOperationResult Table 45.7. Table 7. 
Replace Operations Operation Name Description InfinispanOperation.REPLACE Conditionally replaces an entry in the cache, optionally with expiration InfinispanOperation.REPLACEASYNC Asynchronously conditionally replaces an entry in the cache, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue CamelInfinispanOldValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 45.8. Table 8. Clear Operations Operation Name Description InfinispanOperation.CLEAR Clears the cache InfinispanOperation.CLEARASYNC Asynchronously clears the cache Table 45.9. Table 9. Size Operation Operation Name Description InfinispanOperation.SIZE Returns the number of entries in the cache Result Header CamelInfinispanOperationResult Table 45.10. Table 10. Stats Operation Operation Name Description InfinispanOperation.STATS Returns statistics about the cache Result Header : CamelInfinispanOperationResult Table 45.11. Table 11. Query Operation Operation Name Description InfinispanOperation.QUERY Executes a query on the cache Required Headers : CamelInfinispanQueryBuilder Result Header : CamelInfinispanOperationResult Note Write methods like put(key, value) and remove(key) do not return the value by default. 45.7. Message Headers Name Default Value Type Context Description CamelInfinispanCacheName null String Shared The cache participating in the operation or event. CamelInfinispanOperation PUT InfinispanOperation Producer The operation to perform. CamelInfinispanMap null Map Producer A Map to use in case of CamelInfinispanOperationPutAll operation CamelInfinispanKey null Object Shared The key to perform the operation to or the key generating the event. CamelInfinispanValue null Object Producer The value to use for the operation. CamelInfinispanEventType null String Consumer The type of the received event. CamelInfinispanLifespanTime null long Producer The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. CamelInfinispanTimeUnit null String Producer The Time Unit of an entry Lifespan Time. CamelInfinispanMaxIdleTime null long Producer The maximum amount of time an entry is allowed to be idle for before it is considered as expired. CamelInfinispanMaxIdleTimeUnit null String Producer The Time Unit of an entry Max Idle Time. CamelInfinispanQueryBuilder null InfinispanQueryBuilder Producer The QueryBuilde to use for QUERY command, if not present the command defaults to InifinispanConfiguration's one CamelInfinispanOperationResultHeader null String Producer Store the operation result in a header instead of the message body 45.8. 
Examples Put a key/value into a named cache: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant("123") (2) .to("infinispan:myCacheName&cacheContainer=#cacheContainer"); (3) Where, 1 - Set the operation to perform 2 - Set the key used to identify the element in the cache 3 - Use the configured cache manager cacheContainer from the registry to put an element to the cache named myCacheName It is possible to configure the lifetime and/or the idle time before the entry expires and gets evicted from the cache, as example: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to("infinispan:myCacheName"); where, 1 - Set the lifespan of the entry 2 - Set the time unit for the lifespan Queries from("direct:start") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having("name").like("%abc%").build(); } }) .to("infinispan:myCacheName?cacheContainer=#cacheManager") ; Note The .proto descriptors for domain objects must be registered with the remote Data Grid server, see Remote Query Example in the official Infinispan documentation. Custom Listeners from("infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener") .to("mock:result"); The instance of myCustomListener must exist and Camel should be able to look it up from the Registry . Users are encouraged to extend the org.apache.camel.component.infinispan.remote.InfinispanRemoteCustomListener class and annotate the resulting class with @ClientListener which can be found found in package org.infinispan.client.hotrod.annotation . 45.9. Using the Infinispan based idempotent repository In this section we will use the Infinispan based idempotent repository. Java Example InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts("localhost:1122") InfinispanRemoteIdempotentRepository repo = new InfinispanRemoteIdempotentRepository("idempotent"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .idempotentConsumer(header("MessageID"), repo) (3) .to("mock:result"); } }); where, 1 - Configure the cache 2 - Configure the repository bean 3 - Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.remote.InfinispanRemoteIdempotentRepository" destroy-method="stop"> <constructor-arg value="idempotent"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration"> <property name="hosts" value="localhost:11222"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <idempotentConsumer messageIdRepositoryRef="infinispanRepo"> (3) <header>MessageID</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> where, 1 - Set the name of the cache that will be used by the repository 2 - Configure the repository bean 3 - Set the repository to the route 45.10. 
Using the Infinispan based aggregation repository In this section we will use the Infinispan based aggregation repository. Java Example InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts("localhost:1122") InfinispanRemoteAggregationRepository repo = new InfinispanRemoteAggregationRepository(); (2) repo.setCacheName("aggregation"); repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .aggregate(header("MessageID")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategyRef("myStrategy") .to("mock:result"); } }); where, 1 - Configure the cache 2 - Create the repository bean 3 - Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.remote.InfinispanRemoteAggregationRepository" destroy-method="stop"> <constructor-arg value="aggregation"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration"> <property name="hosts" value="localhost:11222"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <aggregate strategyRef="myStrategy" completionSize="3" aggregationRepositoryRef="infinispanRepo"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri="mock:result"/> </aggregate> </route> </camelContext> where, 1 - Set the name of the cache that will be used by the repository 2 - Configure the repository bean 3 - Set the repository to the route Note With the release of Infinispan 11, it is required to set the encoding configuration on any cache created. This is critical for consuming events too. For more information have a look at Data Encoding and MediaTypes in the official Infinispan documentation. 45.11. Spring Boot Auto-Configuration The component supports 23 options, which are listed below. Name Description Default Type camel.component.infinispan.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.infinispan.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.infinispan.cache-container Specifies the cache Container to connect. The option is a org.infinispan.client.hotrod.RemoteCacheManager type. RemoteCacheManager camel.component.infinispan.cache-container-configuration The CacheContainer configuration. Used if the cacheContainer is not defined. The option is a org.infinispan.client.hotrod.configuration.Configuration type. Configuration camel.component.infinispan.configuration Component configuration. The option is a org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration type. 
InfinispanRemoteConfiguration camel.component.infinispan.configuration-properties Implementation specific properties for the CacheManager. Map camel.component.infinispan.configuration-uri An implementation specific URI for the CacheManager. String camel.component.infinispan.custom-listener Returns the custom listener in use, if provided. The option is a org.apache.camel.component.infinispan.remote.InfinispanRemoteCustomListener type. InfinispanRemoteCustomListener camel.component.infinispan.enabled Whether to enable auto configuration of the infinispan component. This is enabled by default. Boolean camel.component.infinispan.event-types Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CLIENT_CACHE_ENTRY_CREATED, CLIENT_CACHE_ENTRY_MODIFIED, CLIENT_CACHE_ENTRY_REMOVED, CLIENT_CACHE_ENTRY_EXPIRED, CLIENT_CACHE_FAILOVER. String camel.component.infinispan.flags A comma separated list of org.infinispan.client.hotrod.Flag to be applied by default on each cache invocation. String camel.component.infinispan.hosts Specifies the host of the cache on Infinispan instance. String camel.component.infinispan.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.infinispan.operation The operation to perform. InfinispanOperation camel.component.infinispan.password Define the password to access the infinispan instance. String camel.component.infinispan.query-builder Specifies the query builder. The option is a org.apache.camel.component.infinispan.InfinispanQueryBuilder type. InfinispanQueryBuilder camel.component.infinispan.remapping-function Set a specific remappingFunction to use in a compute operation. The option is a java.util.function.BiFunction type. BiFunction camel.component.infinispan.result-header Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String camel.component.infinispan.sasl-mechanism Define the SASL Mechanism to access the infinispan instance. String camel.component.infinispan.secure Define if we are connecting to a secured Infinispan instance. false Boolean camel.component.infinispan.security-realm Define the security realm to access the infinispan instance. String camel.component.infinispan.security-server-name Define the security server name to access the infinispan instance. String camel.component.infinispan.username Define the username to access the infinispan instance. String
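Because these are standard Spring Boot configuration properties, they can also be supplied outside application.properties, for example as command-line arguments when starting the application. The following is a minimal sketch; the JAR name, host, and credentials are placeholders.

# Any camel.component.infinispan.* option listed above can be passed this way.
java -jar target/my-camel-app.jar \
  --camel.component.infinispan.hosts=infinispan.example.com:11222 \
  --camel.component.infinispan.username=developer \
  --camel.component.infinispan.password=changeme \
  --camel.component.infinispan.secure=true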
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-starter</artifactId> </dependency>",
"infinispan://cacheName?[options]",
"infinispan:cacheName",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant(\"123\") (2) .to(\"infinispan:myCacheName&cacheContainer=#cacheContainer\"); (3)",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to(\"infinispan:myCacheName\");",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having(\"name\").like(\"%abc%\").build(); } }) .to(\"infinispan:myCacheName?cacheContainer=#cacheManager\") ;",
"from(\"infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener\") .to(\"mock:result\");",
"InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts(\"localhost:1122\") InfinispanRemoteIdempotentRepository repo = new InfinispanRemoteIdempotentRepository(\"idempotent\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .idempotentConsumer(header(\"MessageID\"), repo) (3) .to(\"mock:result\"); } });",
"<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteIdempotentRepository\" destroy-method=\"stop\"> <constructor-arg value=\"idempotent\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration\"> <property name=\"hosts\" value=\"localhost:11222\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <idempotentConsumer messageIdRepositoryRef=\"infinispanRepo\"> (3) <header>MessageID</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>",
"InfinispanRemoteConfiguration conf = new InfinispanRemoteConfiguration(); (1) conf.setHosts(\"localhost:1122\") InfinispanRemoteAggregationRepository repo = new InfinispanRemoteAggregationRepository(); (2) repo.setCacheName(\"aggregation\"); repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .aggregate(header(\"MessageID\")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategyRef(\"myStrategy\") .to(\"mock:result\"); } });",
"<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteAggregationRepository\" destroy-method=\"stop\"> <constructor-arg value=\"aggregation\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.remote.InfinispanRemoteConfiguration\"> <property name=\"hosts\" value=\"localhost:11222\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <aggregate strategyRef=\"myStrategy\" completionSize=\"3\" aggregationRepositoryRef=\"infinispanRepo\"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext>"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-infinispan-component-starter
|
9.6. Transport Considerations
|
9.6. Transport Considerations Although you can find information about all JBoss Data Virtualization settings using the Management CLI (see Section 10.1, "JBoss Data Virtualization Settings" ), this section provides some additional information about the settings related to transports. JBoss Data Virtualization provides three transports by default: odbc, jdbc, and embedded. Transport settings (such as those listed below) are configured for each. max-socket-threads Default is 0. Determines the maximum number of threads dedicated to initial request processing. Zero indicates that the system default, the maximum number of available processors, is used. Socket threads handle NIO non-blocking IO operations as well as directly servicing any operation that can run without blocking. For longer running operations, the socket threads queue work with the query engine. (The query engine has two properties that determine its thread utilization: max-threads and max-active-plans .) All JDBC/ODBC socket operations are non-blocking, so setting max-socket-threads higher than the maximum effective parallelism of the machine should not result in greater performance. input-buffer-size Default is 0, which uses the system default. Before adjusting input-buffer-size for any of the transports, keep in mind that each client creates a new socket connection. Increase this value only if the number of clients is constrained. output-buffer-size Default is 0, which uses the system default. Before adjusting output-buffer-size for any of the transports, keep in mind that each client creates a new socket connection. Increase this value only if the number of clients is constrained. JDBC clients may need to adjust low-level transport values, as well as SSL client connection properties, via a teiid-client-settings.properties file placed in the client application's classpath. (An example file can be found within the EAP_HOME /modules/system/layers/base/org/jboss/teiid/client/main/teiid-client- VERSION .jar file.) Note Typical installations do not need any of these settings adjusted.
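If you do need to change them, transport attributes such as max-socket-threads can be read and written with the Management CLI. The commands below are a sketch: they assume the teiid subsystem exposes each transport as /subsystem=teiid/transport=<name>, so adjust the transport name (odbc, jdbc, or embedded) and the value to suit your environment.

# Read the current JDBC transport configuration.
EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=teiid/transport=jdbc:read-resource"

# Raise the socket thread cap for the JDBC transport, then reload the server.
EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=teiid/transport=jdbc:write-attribute(name=max-socket-threads,value=4)"
EAP_HOME/bin/jboss-cli.sh --connect --command=":reload"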
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/transport_considerations
|
Chapter 9. Caching images for faster workspace start
|
Chapter 9. Caching images for faster workspace start This section describes installing the Image Puller on a CodeReady Workspaces cluster to cache images on cluster nodes. 9.1. Image Puller overview Slow starts of Red Hat CodeReady Workspaces workspaces may be caused by waiting for the underlying cluster to pull images used in workspaces from remote registries. As such, pre-pulling images can improve start times significantly. The Image Puller can be used to pre-pull images and shorten workspace start times. The Image Puller is an additional deployment that runs alongside Red Hat CodeReady Workspaces. Given a list of images to pre-pull, the application runs inside a cluster and creates a DaemonSet that pulls the images on each node. Note The minimal requirement for an image to be pre-pulled is the availability of the sleep command, which means that FROM scratch images (for example, 'che-machine-exec') are currently not supported. Also, images that mount volumes in the dockerfile are not supported for pre-pulling on OpenShift. The application can be deployed: using OperatorHub or installing the kubernetes image puller operator by processing and applying OpenShift templates. The Image Puller pulls its configuration from a ConfigMap with the following available parameters: Table 9.1. Image Puller default parameters Parameter Usage Default CACHING_INTERVAL_HOURS Interval, in hours, between checking health of DaemonSets "1" CACHING_MEMORY_REQUEST The memory request for each cached image when the puller is running 10Mi CACHING_MEMORY_LIMIT The memory limit for each cached image when the puller is running 20Mi CACHING_CPU_REQUEST The CPU request for each cached image when the puller is running .05 CACHING_CPU_LIMIT The CPU limit for each cached image when the puller is running .2 DAEMONSET_NAME Name of DaemonSet to be created kubernetes-image-puller NAMESPACE Namespace where DaemonSet is to be created k8s-image-puller IMAGES List of images to be cached, in the format <name>=<image>;... Contains a default list of images. Before deploying, fill this with the images that fit the current requirements NODE_SELECTOR Node selector applied to the Pods created by the DaemonSet '{}' The default memory requests and limits ensure that the container has enough memory to start. When changing CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT , you will need to consider the total memory allocated to the DaemonSet Pods in the cluster: (memory limit) * (number of images) * (number of nodes in the cluster) For example, running the image puller that caches 5 images on 20 nodes, with a container memory limit of 20Mi requires 2000Mi of memory. 9.2. Deploying Image Puller using the operator The recommended way to deploy the Image Puller is through the operator . 9.2.1. Installing the Image Puller on OpenShift using OperatorHub First, create a namespace in your cluster to host the image puller. Our example will use the namespace "image-puller". Navigate to your OpenShift cluster's console, and select Operators . Select OperatorHub and type "image puller" into the "Filter by keyword.." search bar. Click the OpenShift Image Puller Operator , click Continue and click Install . At the Installation Mode selection, choose A specific namespace on the cluster , and use the drop-down to find the namespace you created to install the image puller. Click Subscribe . Wait for the OpenShift Image Puller Operator to install, and click the installation. Click the OpenShiftImagePuller tab, and then click Create instance . 
You will be taken to a screen with a YAML editor with a OpenShiftImagePuller Custom Resource. Make any modifications to the Custom resource and click Create . Navigate to the Workloads and Pods menu in the namespace that the image puller was installed, and you should see pods being created. 9.2.2. Installing the Image Puller on OpenShift using the Operator Create a namespace to host the kubernetes image puller, and apply the following manifests from the GitHub repository : export NAMESPACE=<namespace you created to host the image puller> oc apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/crds/che.eclipse.org_kubernetesimagepullers_crd.yaml -n USDNAMESPACE oc apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/role.yaml -n USDNAMESPACE oc apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/role_binding.yaml -n USDNAMESPACE oc apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/service_account.yaml -n USDNAMESPACE oc apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/operator.yaml -n USDNAMESPACE Then create a OpenShiftImagePuller Custom Resource: apiVersion: che.eclipse.org/v1alpha1 kind: KubernetesImagePuller metadata: name: image-puller namespace: <namespace you installed the image puller in> spec: configMapName: k8s-image-puller daemonsetName: k8s-image-puller deploymentName: kubernetes-image-puller images: >- java11-maven=quay.io/eclipse/che-java11-maven:nightly;che-theia=quay.io/eclipse/che-theia: 9.3. Deploying Image Puller using OpenShift templates The Image Puller repository contains OpenShift templates for deploying on OpenShift. Prerequisites A running OpenShift cluster. The oc binary file. The following parameters are available to further configure the OpenShift templates: Table 9.2. Parameters for installing with OpenShift templates Value Usage Default DAEMONSET_NAME The value of DAEMONSET_NAME to set in the ConfigMap kubernetes-image-puller IMAGE Image used for the kubernetes-image-puller deployment registry.redhat.io/codeready-workspaces/imagepuller-rhel8 IMAGE_TAG The image tag to pull 2.1 SERVICEACCOUNT_NAME The name of the ServiceAccount used by the deployment (created as part of installation) k8s-image-puller CACHING_INTERVAL_HOURS The value of CACHING_INTERVAL_HOURS to set in the ConfigMap "1" CACHING_INTERVAL_REQUEST The value of CACHING_MEMORY_REQUEST to set in the ConfigMap "10Mi" CACHING_INTERVAL_LIMIT The value of CACHING_MEMORY_LIMIT to set in the ConfigMap "20Mi"` NODE_SELECTOR The value of NODE_SELECTOR to set in the ConfigMap "{}" See Table 9.1, "Image Puller default parameters" for more information about configuration values, such as DAEMONSET_NAME , CACHING_INTERVAL_HOURS , and CACHING_MEMORY_REQUEST . Table 9.3. 
List of recommended images to pre-pull Image URL Tag stacks-java-rhel8 registry.access.redhat.com/codeready-workspaces/stacks-java-rhel8 2.1 theia-rhel8 registry.access.redhat.com/codeready-workspaces/theia-rhel8 2.1 stacks-golang-rhel8 registry.access.redhat.com/codeready-workspaces/stacks-golang-rhel8 2.1 stacks-node-rhel8 registry.access.redhat.com/codeready-workspaces/stacks-node-rhel8 2.1 theia-endpoint-rhel8 registry.access.redhat.com/codeready-workspaces/theia-rhel8 2.1 pluginbroker-metadata-rhel8 registry.access.redhat.com/codeready-workspaces/pluginbroker-metadata-rhel8 2.1 pluginbroker-artifacts-rhel8 registry.access.redhat.com/codeready-workspaces/pluginbroker-artifacts-rhel8 2.1 See Table 9.1, "Image Puller default parameters" for more information about configuration values, such as DAEMONSET_NAME , CACHING_INTERVAL_HOURS , and CACHING_MEMORY_REQUEST . Procedure Installing Clone the kubernetes-image-puller repository: Create a new OpenShift project to deploy the puller into: Process and apply the templates to deploy the puller: In CodeReady Workspaces you must use custom values to deploy the image puller. To set custom values, add to the oc process an option: -p <parameterName> = <value> : Verifying the installation Confirm that a new deployment, kubernetes-image-puller , and a DaemonSet (named based on the value of the DAEMONSET_NAME parameter) exist. The DaemonSet needs to have a Pod for each node in the cluster: USD oc get deployment,daemonset,pod --namespace k8s-image-puller deployment.extensions/kubernetes-image-puller 1/1 1 1 2m19s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.extensions/kubernetes-image-puller 1 1 1 1 1 <none> 2m10s NAME READY STATUS RESTARTS AGE pod/kubernetes-image-puller-5495f46497-mkd4p 1/1 Running 0 2m18s pod/kubernetes-image-puller-n8bmf 3/3 Running 0 2m10s Check that the ConfigMap named k8s-image-puller has the values you specified in your parameter substitution, or that they contain the default values:
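You can read the ConfigMap back with oc get configmap k8s-image-puller --output yaml and compare the IMAGES value with what you passed to the template. As a further optional check, you can confirm that one of the cached images is actually present on a node. This is a sketch that assumes cluster-admin access and that crictl is available on the node through oc debug.

# Pick one worker node and list the container images present on it,
# filtering for the pre-pulled CodeReady Workspaces images.
NODE=$(oc get nodes -l node-role.kubernetes.io/worker -o name | head -n 1)
oc debug "${NODE}" -- chroot /host crictl images | grep codeready-workspaces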
|
[
"export NAMESPACE=<namespace you created to host the image puller> apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/crds/che.eclipse.org_kubernetesimagepullers_crd.yaml -n USDNAMESPACE apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/role.yaml -n USDNAMESPACE apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/role_binding.yaml -n USDNAMESPACE apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/service_account.yaml -n USDNAMESPACE apply -f https://raw.githubusercontent.com/che-incubator/kubernetes-image-puller-operator/master/deploy/operator.yaml -n USDNAMESPACE",
"apiVersion: che.eclipse.org/v1alpha1 kind: KubernetesImagePuller metadata: name: image-puller namespace: <namespace you installed the image puller in> spec: configMapName: k8s-image-puller daemonsetName: k8s-image-puller deploymentName: kubernetes-image-puller images: >- java11-maven=quay.io/eclipse/che-java11-maven:nightly;che-theia=quay.io/eclipse/che-theia:next",
"git clone https://github.com/che-incubator/kubernetes-image-puller cd kubernetes-image-puller",
"oc new-project k8s-image-puller",
"oc process -f deploy/serviceaccount.yaml | oc apply -f - oc process -f deploy/configmap.yaml -p IMAGES='stacks-java-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-java-rhel8:2.1; theia-rhel8=registry.access.redhat.com/codeready-workspaces/theia-rhel8:2.1; stacks-golang-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-golang-rhel8:2.1; stacks-node-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-node-rhel8:2.1; theia-endpoint-rhel8=registry.access.redhat.com/codeready-workspaces/theia-rhel8:2.1; pluginbroker-metadata-rhel8=egistry.access.redhat.com/codeready-workspaces/pluginbroker-metadata-rhel8:2.1; pluginbroker-artifacts-rhel8=registry.access.redhat.com/codeready-workspaces/pluginbroker-artifacts-rhel8:2.1;' | oc apply -f - oc process -f deploy/app.yaml -p IMAGE=registry.redhat.io/codeready-workspaces/imagepuller-rhel8 -p IMAGE_TAG='2.1' | oc apply -f -",
"oc get deployment,daemonset,pod --namespace k8s-image-puller deployment.extensions/kubernetes-image-puller 1/1 1 1 2m19s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.extensions/kubernetes-image-puller 1 1 1 1 1 <none> 2m10s NAME READY STATUS RESTARTS AGE pod/kubernetes-image-puller-5495f46497-mkd4p 1/1 Running 0 2m18s pod/kubernetes-image-puller-n8bmf 3/3 Running 0 2m10s",
"oc get configmap k8s-image-puller --output yaml apiVersion: v1 data: CACHING_INTERVAL_HOURS: \"1\" CACHING_MEMORY_LIMIT: 20Mi CACHING_MEMORY_REQUEST: 10Mi DAEMONSET_NAME: kubernetes-image-puller IMAGES: | stacks-java-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-java-rhel8:2.1; theia-rhel8=registry.access.redhat.com/codeready-workspaces/theia-rhel8:2.1; stacks-golang-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-golang-rhel8:2.1; stacks-node-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-node-rhel8:2.1; theia-endpoint-rhel8=registry.access.redhat.com/codeready-workspaces/theia-rhel8:2.1; pluginbroker-metadata-rhel8=egistry.access.redhat.com/codeready-workspaces/pluginbroker-metadata-rhel8:2.1; pluginbroker-artifacts-rhel8=registry.access.redhat.com/codeready-workspaces/pluginbroker-artifacts-rhel8:2.1; NAMESPACE: k8s-image-puller NODE_SELECTOR: '{}' kind: ConfigMap metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"v1\",\"data\":{\"CACHING_INTERVAL_HOURS\":\"1\",\"CACHING_MEMORY_LIMIT\":\"20Mi\",\"CACHING_MEMORY_REQUEST\":\"10Mi\",\"DAEMONSET_NAME\":\"kubernetes-image-puller\",\"IMAGES\":\"stacks-java-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-java-rhel8:2.1; theia-rhel8=registry.access.redhat.com/codeready-workspaces/theia-rhel8:2.1; stacks-golang-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-golang-rhel8:2.1; stacks-node-rhel8=registry.access.redhat.com/codeready-workspaces/stacks-node-rhel8:2.1; theia-endpoint-rhel8=registry.access.redhat.com/codeready-workspaces/theia-rhel8:2.1; pluginbroker-metadata-rhel8=egistry.access.redhat.com/codeready-workspaces/pluginbroker-metadata-rhel8:2.1; pluginbroker-artifacts-rhel8=registry.access.redhat.com/codeready-workspaces/pluginbroker-artifacts-rhel8:2.1;\\n\",\"NAMESPACE\":\"k8s-image-puller\",\"NODE_SELECTOR\":\"{}\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"k8s-image-puller\",\"namespace\":\"k8s-image-puller\"},\"type\":\"Opaque\"} creationTimestamp: 2020-02-17T22:40:13Z name: k8s-image-puller namespace: k8s-image-puller resourceVersion: \"72250\" selfLink: /api/v1/namespaces/k8s-image-puller/configmaps/k8s-image-puller uid: 76430ed6-51d6-11ea-9c19-52fdfc072182"
] |
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/administration_guide/caching-images-for-faster-workspace-start_crw
|
Monitoring OpenShift Data Foundation
|
Monitoring OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.13 View cluster health, metrics, or set alerts. Red Hat Storage Documentation Team Abstract Read this document for instructions on monitoring Red Hat OpenShift Data Foundation using the Block and File, and Object dashboards.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/monitoring_openshift_data_foundation/index
|
Appendix A. Versioning information
|
Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024.
| null |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_decision_manager/versioning-information
|
Appendix A. Reference material
|
Appendix A. Reference material A.1. About MTA command-line arguments The following is a detailed description of the available MTA command line arguments. Note To run the MTA command without prompting, for example when executing from a script, you must use the following arguments: --overwrite --input --target Example A.1. MTA CLI arguments Command Type Description --analyze-known-libraries Flag to analyze known open-source libraries. --bulk Flag for running multiple analyze commands in bulk, which results in a combined static report. --context-lines Integer Flag to define the number of lines of source code to include in the output for each incident (default: 100 ). -d , --dependency-folders String Array Flag for the directory for dependencies. --enable-default-rulesets Boolean Flag to run default rulesets with analysis (default: true ). -h , --help Flag to output help for analyze . --http-proxy String Flag for Hyper Text Transfer Protocol (HTTP) proxy string URL --https-proxy String Flag for Hypertext Transfer Protocol Secure (HTTPS) proxy string URL --incident-selector String Flag to select incidents based on custom variables, for example, !package=io.konveyor.demo.config-utils -i , --input String Flag for the path to application source code or a binary. For more details, see Specifying the input . --jaeger-endpoint String Flag for the Jaeger endpoint to collect traces. --json-output Flag to create analysis and dependency output as JSON. -l , --label-selector String Flag to run rules based on a specified label selector expression. --list-providers Flag to list available supported providers. --list-sources Flag to list rules for available migration sources. --list-targets Flag to list rules for available migration targets. --maven-settings String Flag for the path to a custom Maven settings file to use. -m , --mode String Flag for the analysis mode; this must be either full , for source and dependencies, or source-only (default: full ). --no-proxy String Flag to exclude URLs from passing through any proxy (relevant only when a proxy is configured). -o , --output String Flag for the path to the directory for analysis output. For more details, see Specifying the output directory . --override-provider-settings String Flag to override the provider settings. The analysis pod runs on the host network, and no providers are started. --overwrite Flag to overwrite the output directory. If you do not specify this argument and the --output directory exists, you are prompted to choose whether to overwrite the contents. --provider String Array Flag to specify which provider or providers to run. --rules String Array Flag to specify the filename or directory containing rule files. Use multiple times for additional rules, for example, --rules --rules ... . --run-local Flag to run the analysis directly on the local system without containers (for Java and Maven). --skip-static-report Flag to skip generating the static report. -s , --source String Array Flag for the source technology to consider for analysis. Use multiple times for additional sources, for example, --source --source ... . For more details, see Setting the source technology . -t , --target String Array Flag for the target technology to consider for analysis. Use multiple times for additional targets, for example, --target --target ... . For more details, see Setting the target technology . A.1.1. Specifying the input A space-delimited list of the path to the file or directory containing one or more applications to be analyzed. This argument is required.
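For example, a minimal non-interactive run that combines the required arguments noted above might look like the following sketch, in which the mta-cli binary name, the file paths, and the chosen target are illustrative assumptions rather than values taken from this guide: mta-cli analyze --input /home/user/apps/example.war --output /home/user/reports/example --target eap8 --overwrite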
Usage Depending on whether the input file type provided to the --input argument is a file or directory, it will be evaluated as follows depending on the additional arguments provided. Directory --sourceMode : The directory is evaluated as a single application. File --sourceMode : The file is evaluated as a compressed project. A.1.2. Specifying the output directory Specify the path to the directory to output the report information generated by MTA. Usage If omitted, the report will be generated in an <INPUT_ARCHIVE_OR_DIRECTORY>.report directory. If the output directory exists, you will be prompted with the following question with a default answer of N : However, if you specify the --overwrite argument, MTA will proceed to delete and recreate the directory. See the description of this argument for more information. A.1.3. Setting the source technology A space-delimited list of one or more source technologies, servers, platforms, or frameworks to migrate from. You can use this argument, in conjunction with the --target argument, to determine which rulesets are used. Use the --listSourceTechnologies argument to list all available sources. Usage The --source argument now provides version support, which follows the Maven version range syntax . This instructs MTA to only run the rulesets matching the specified versions, for example, --source eap:5 . Warning When migrating to JBoss EAP, be sure to specify the version, for example, eap:6 . Specifying only eap will run rulesets for all versions of JBoss EAP, including those not relevant to your migration path. See Supported migration paths in Introduction to the Migration Toolkit for Applications for the appropriate JBoss EAP version. A.1.4. Setting the target technology A space-delimited list of one or more target technologies, servers, platforms, or frameworks to migrate to. You can use this argument, in conjunction with the --source argument, to determine which rulesets are used. If you do not specify this option, you are prompted to select a target. Use the --listTargetTechnologies argument to list all available targets. Usage The --target argument now provides version support, which follows the Maven version range syntax . This instructs MTA to only run the rulesets matching the specified versions, for example, --target eap:7 . A.2. 
Supported technology tags The following technology tags are supported in MTA 7.2.1: 0MQ Client 3scale Acegi Security AcrIS Security ActiveMQ library Airframe Airlift Log Manager AKKA JTA Akka Testkit Amazon SQS Client AMQP Client Anakia AngularFaces ANTLR StringTemplate AOP Alliance Apache Accumulo Client Apache Aries Apache Commons JCS Apache Commons Validator Apache Flume Apache Geronimo Apache Hadoop Apache HBase Client Apache Ignite Apache Karaf Apache Mahout Apache Meecrowave JTA Apache Sirona JTA Apache Synapse Apache Tapestry Apiman Applet Arquillian AspectJ Atomikos JTA Avalon Logkit Axion Driver Axis Axis2 BabbageFaces Bean Validation BeanInject Blaze Blitz4j BootsFaces Bouncy Castle ButterFaces Cache API Cactus Camel Camel Messaging Client Camunda Cassandra Client CDI Cfg Engine Chunk Templates Cloudera Coherence Common Annotations Composite Logging Composite Logging JCL Concordion CSS Cucumber Dagger DbUnit Demoiselle JTA Derby Driver Drools DVSL Dynacache EAR Deployment Easy Rules EasyMock Eclipse RCP EclipseLink Ehcache EJB EJB XML Elasticsearch Entity Bean EtlUnit Eureka Everit JTA Evo JTA Feign File system Logging FormLayoutMaker FreeMarker Geronimo JTA GFC Logging GIN GlassFish JTA Google Guice Grails Grapht DI Guava Testing GWT H2 Driver Hamcrest Handlebars HavaRunner Hazelcast Hdiv Hibernate Hibernate Cfg Hibernate Mapping Hibernate OGM HighFaces HornetQ Client HSQLDB Driver HTTP Client HttpUnit ICEfaces Ickenham Ignite JTA Ikasan iLog Infinispan Injekt for Kotlin Iroh Istio Jamon Jasypt Java EE Batch Java EE Batch API Java EE JACC Java EE JAXB Java EE JAXR Java EE JSON-P Java Transaction API JavaFX JavaScript Javax Inject JAX-RS JAX-WS JayWire JBehave JBoss Cache JBoss EJB XML JBoss logging JBoss Transactions JBoss Web XML JBossMQ Client JBPM JCA Jcabi Log JCache JCunit JDBC JDBC datasources JDBC XA datasources Jersey Jetbrick Template Jetty JFreeChart JFunk JGoodies JMock JMockit JMS Connection Factory JMS Queue JMS Topic JMustache JNA JNI JNLP JPA entities JPA Matchers JPA named queries JPA XML JSecurity JSF JSF Page JSilver JSON-B JSP Page JSTL JTA Jukito JUnit Ka DI Keyczar Kibana KLogger Kodein Kotlin Logging KouInject KumuluzEE JTA LevelDB Client Liferay LiferayFaces Lift JTA Log.io Log4J Log4s Logback Logging Utils Logstash Lumberjack Macros Magicgrouplayout Mail Management EJB MapR MckoiSQLDB Driver Memcached Message (MDB) Micro DI Micrometer Microsoft SQL Driver MiGLayout MinLog Mixer Mockito MongoDB Client Monolog Morphia MRules Mule Mule Functional Test Framework MultithreadedTC Mycontainer JTA MyFaces MySQL Driver Narayana Arjuna Needle Neo4j NLOG4J Nuxeo JTA/JCA OACC OAUTH OCPsoft Logging Utils OmniFaces OpenFaces OpenPojo OpenSAML OpenWS OPS4J Pax Logging Service Oracle ADF Oracle DB Driver Oracle Forms Orion EJB XML Orion Web XML Oscache OTR4J OW2 JTA OW2 Log Util OWASP CSRF Guard OWASP ESAPI Peaberry Pega Persistence units Petals EIP PicketBox PicketLink PicoContainer Play Play Test Plexus Container Polyforms DI Portlet PostgreSQL Driver PowerMock PrimeFaces Properties Qpid Client RabbitMQ Client RandomizedTesting Runner Resource Adapter REST Assured Restito RichFaces RMI RocketMQ Client Rythm Template Engine SAML Santuario Scalate Scaldi Scribe Seam Security Realm ServiceMix Servlet ShiftOne Shiro Silk DI SLF4J Snippetory Template Engine SNMP4J Socket handler logging Spark Specsy Spock Spring Spring Batch Spring Boot Spring Boot Actuator Spring Boot Cache Spring Boot Flo Spring Cloud Config Spring Cloud Function Spring Data Spring Data JPA spring DI 
Spring Integration Spring JMX Spring Messaging Client Spring MVC Spring Properties Spring Scheduled Spring Security Spring Shell Spring Test Spring Transactions Spring Web SQLite Driver SSL Standard Widget Toolkit (SWT) Stateful (SFSB) Stateless (SLSB) Sticky Configured Stripes Struts SubCut Swagger SwarmCache Swing SwitchYard Syringe Talend ESB Teiid TensorFlow Test Interface TestNG Thymeleaf TieFaces tinylog Tomcat Tornado Inject Trimou Trunk JGuard Twirl Twitter Util Logging UberFire Unirest Unitils Vaadin Velocity Vlad Water Template Engine Web Services Metadata Web Session Web XML File WebLogic Web XML Webmacro WebSocket WebSphere EJB WebSphere EJB Ext WebSphere Web XML WebSphere WS Binding WebSphere WS Extension Weka Weld WF Core JTA Wicket Winter WSDL WSO2 WSS4J XACML XFire XMLUnit Zbus Client Zipkin A.3. About rule story points A.3.1. What are story points? Story points are an abstract metric commonly used in Agile software development to estimate the level of effort needed to implement a feature or change. The Migration Toolkit for Applications uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. It does not necessarily translate to man-hours, but the value should be consistent across tasks. A.3.2. How story points are estimated in rules Estimating the level of effort for the story points for a rule can be tricky. The following are the general guidelines MTA uses when estimating the level of effort required for a rule. Level of Effort Story Points Description Information 0 An informational warning with very low or no priority for migration. Trivial 1 The migration is a trivial change or a simple library swap with no or minimal API changes. Complex 3 The changes required for the migration task are complex, but have a documented solution. Redesign 5 The migration task requires a redesign or a complete library change, with significant API changes. Rearchitecture 7 The migration requires a complete rearchitecture of the component or subsystem. Unknown 13 The migration solution is not known and may need a complete rewrite. A.3.3. Task category In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort. Mandatory The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform. Optional If the migration task is not completed, the application should work, but the results may not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed. Potential The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type. Information The task is included to inform you of the existence of certain files. These may need to be examined or modified as part of the modernization effort, but changes are typically not required. For more information on categorizing tasks, see Using custom rule categories . A.4. Additional Resources A.4.1. 
Contributing to the project To help the Migration Toolkit for Applications cover most application constructs and server configurations, including yours, you can help with any of the following items: Send an email to [email protected] and let us know what MTA migration rules must cover. Provide example applications to test migration rules. Identify application components and problem areas that might be difficult to migrate: Write a short description of the problem migration areas. Write a brief overview describing how to solve the problem in migration areas. Try Migration Toolkit for Applications on your application. Make sure to report any issues you meet. Contribute to the Migration Toolkit for Applications rules repository: Write a Migration Toolkit for Applications rule to identify or automate a migration process. Create a test for the new rule. For more information, see Rule Development Guide . Contribute to the project source code: Create a core rule. Improve MTA performance or efficiency. Any level of involvement is greatly appreciated! A.4.2. Migration Toolkit for Applications development resources Use the following resources to learn and contribute to the Migration Toolkit for Applications development: MTA forums: https://developer.jboss.org/en/windup Jira issue tracker: https://issues.redhat.com/projects/MTA/issues MTA mailing list: [email protected] A.4.3. Reporting issues MTA uses Jira as its issue tracking system. If you encounter an issue executing MTA, submit a Jira issue . Revised on 2025-02-26 19:46:23 UTC
|
[
"--input <INPUT_ARCHIVE_OR_DIRECTORY> [...]",
"--output <OUTPUT_REPORT_DIRECTORY>",
"Overwrite all contents of \"/home/username/<OUTPUT_REPORT_DIRECTORY>\" (anything already in the directory will be deleted)? [y,N]",
"--source <SOURCE_1> <SOURCE_2>",
"--target <TARGET_1> <TARGET_2>"
] |
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/cli_guide/reference_material
|
Monitoring
|
Monitoring OpenShift Container Platform 4.7 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/index
|
Chapter 7. Adjusting IdM Directory Server performance
|
Chapter 7. Adjusting IdM Directory Server performance You can tune the performance of Identity Management's databases by adjusting LDAP attributes controlling the Directory Server's resources and behavior. To adjust how the Directory Server caches data , see the following procedures: Adjusting the entry cache size Adjusting the database index cache size Re-enabling entry and database cache auto-sizing Adjusting the DN cache size Adjusting the normalized DN cache size To adjust the Directory Server's resource limits , see the following procedures: Adjusting the maximum message size Adjusting the maximum number of file descriptors Adjusting the connection backlog size Adjusting the maximum number of database locks Disabling the Transparent Huge Pages feature To adjust timeouts that have the most influence on performance, see the following procedures: Adjusting the input/output block timeout Adjusting the idle connection timeout Adjusting the replication release timeout To install an IdM server or replica with custom Directory Server settings from an LDIF file, see the following procedure: Installing an IdM server or replica with custom database-settings from an LDIF file 7.1. Adjusting the entry cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-cachememsize attribute specifies the size, in bytes, for the available memory space for the entry cache. This attribute is one of the most important values for controlling how much physical RAM the directory server uses. If the entry cache size is too small, you might see the following error in the Directory Server error logs in the /var/log/dirsrv/slapd- INSTANCE-NAME /errors log file: Red Hat recommends fitting the entry cache and the database index entry cache in memory. Default value 209715200 (200 MiB) Valid range 500000 - 18446744073709551615 (500 kB - (2^64 - 1)) Entry DN location cn= database-name ,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Disable automatic cache tuning. Display the database suffixes and their corresponding back ends. This command displays the name of the back end database next to each suffix. Use the suffix's database name in the next step. Set the entry cache size for the database. This example sets the entry cache for the userroot database to 2 gigabytes. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust cache-memsize to a different value, or re-enable cache auto-sizing. Verification Display the value of the nsslapd-cachememsize attribute and verify it has been set to your desired value. Additional resources nsslapd-cachememsize in Directory Server 11 documentation Re-enabling entry and database cache auto-sizing . 7.2. Adjusting the database index cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-dbcachesize attribute controls the amount of memory the database indexes use. This cache size has less of an impact on Directory Server performance than the entry cache size does, but if there is available RAM after the entry cache size is set, Red Hat recommends increasing the amount of memory allocated to the database cache. 
The database cache is limited to 1.5 GB RAM because higher values do not improve performance. Default value 10000000 (10 MB) Valid range 500000 - 1610611911 (500 kB - 1.5GB) Entry DN location cn=config,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Disable automatic cache tuning, and set the database cache size. This example sets the database cache to 256 megabytes. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust dbcachesize to a different value, or re-enable cache auto-sizing. Verification Display the value of the nsslapd-dbcachesize attribute and verify it has been set to your desired value. Additional resources nsslapd-dbcachesize in Directory Server 11 documentation Re-enabling entry and database cache auto-sizing . 7.3. Re-enabling database and entry cache auto-sizing Important Use the built-in cache auto-sizing feature for optimized performance. Do not set cache sizes manually. By default, the IdM Directory Server automatically determines the optimal size for the database cache and entry cache. Auto-sizing sets aside a portion of free RAM and optimizes the size of both caches based on the hardware resources of the server when the instance starts. Use this procedure to undo custom database cache and entry cache values and restore the cache auto-sizing feature to its default values. nsslapd-cache-autosize This setting controls how much free RAM is allocated for auto-sizing the database and entry caches. A value of 0 disables auto-sizing. Default value 10 (10% of free RAM) Valid range 0 - 100 Entry DN location cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-cache-autosize-split This value sets the percentage of free memory determined by nsslapd-cache-autosize that is used for the database cache. The remaining percentage is used for the entry cache. Default value 25 (25% for the database cache, 75% for the entry cache) Valid range 0 - 100 Entry DN location cn=config,cn=ldbm database,cn=plugins,cn=config Prerequisites You have previously disabled database and entry cache auto-tuning. Procedure Stop the Directory Server. Back up the /etc/dirsrv/ slapd-instance_name /dse.ldif file before making any further modifications. Edit the /etc/dirsrv/ slapd-instance_name /dse.ldif file: Set the percentage of free system RAM to use for the database and entry caches back to the default of 10% of free RAM. Set the percentage used from the free system RAM for the database cache to the default of 25%: Save your changes to the /etc/dirsrv/ slapd-instance_name /dse.ldif file. Start the Directory Server. Verification Display the values of the nsslapd-cache-autosize and nsslapd-cache-autosize-split attributes and verify they have been set to your desired values. Additional resources nsslapd-cache-autosize in Directory Server 11 documentation 7.4. Adjusting the DN cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-dncachememsize attribute specifies the size, in bytes, for the available memory space for the Distinguished Names (DN) cache. The DN cache is similar to the entry cache for a database, but its table stores only the entry ID and the entry DN, which allows faster lookups for rename and moddn operations. 
Default value 10485760 (10 MB) Valid range 500000 - 18446744073709551615 (500 kB - (2^64 - 1)) Entry DN location cn= database-name ,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Optional: Display the database suffixes and their corresponding database names. This command displays the name of the back end database next to each suffix. Use the suffix's database name in the next step. Set the DN cache size for the database. This example sets the DN cache to 20 megabytes. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust dncache-memsize to a different value, or back to the default of 10 MB. Verification Display the new value of the nsslapd-dncachememsize attribute and verify it has been set to your desired value. Additional resources nsslapd-dncachememsize in Directory Server 11 documentation 7.5. Adjusting the normalized DN cache size Important Red Hat recommends using the built-in cache auto-sizing feature for optimized performance. Only change this value if you need to purposely deviate from the auto-tuned values. The nsslapd-ndn-cache-max-size attribute controls the size, in bytes, of the cache that stores normalized distinguished names (NDNs). Increasing this value will retain more frequently used DNs in memory. Default value 20971520 (20 MB) Valid range 0 - 2147483647 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Ensure the NDN cache is enabled. If the cache is off , enable it with the following command. Retrieve the current value of the nsslapd-ndn-cache-max-size parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-ndn-cache-max-size attribute. This example increases the value to 41943040 (40 MB). Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-ndn-cache-max-size to a different value, or re-enable cache auto-sizing. Verification Display the new value of the nsslapd-ndn-cache-max-size attribute and verify it has been set to your desired value. Additional resources nsslapd-ndn-cache-max-size in Directory Server 11 documentation 7.6. Adjusting the maximum message size The nsslapd-maxbersize attribute sets the maximum size in bytes allowed for an incoming message or LDAP request. Limiting the size of requests prevents some kinds of denial of service attacks. If the maximum message size is too small, you might see the following error in the Directory Server error logs at /var/log/dirsrv/slapd- INSTANCE-NAME /errors : The limit applies to the total size of the LDAP request. For example, if the request is to add an entry and if the entry in the request is larger than the configured value or the default, then the add request is denied. However, the limit is not applied to replication processes. Be cautious before changing this attribute. Default value 2097152 (2 MB) Valid range 0 - 2147483647 (0 to 2 GB) Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-maxbersize parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-maxbersize attribute. This example increases the value to 4194304 (4 MB). 
Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-maxbersize to a different value, or back to the default of 2097152 . Verification Display the value of the nsslapd-maxbersize attribute and verify it has been set to your desired value. Additional resources nsslapd-maxbersize (Maximum Message Size) in Directory Server 11 documentation 7.7. Adjusting the maximum number of file descriptors A value can be defined for the DefaultLimitNOFILE parameter in the /etc/systemd/system.conf file. An administrator with root privileges can set the DefaultLimitNOFILE parameter for the ns-slapd process to a lower value by using the setrlimit command. This value then takes precedence over what is in /etc/systemd/system.conf and is accepted by the Identity Management (IdM) Directory Server (DS) as the value for the nsslapd-maxdescriptors attribute. The nsslapd-maxdescriptors attribute sets the maximum, platform-dependent number of file descriptors that the IdM LDAP uses. File descriptors are used for client connections, log files, sockets, and other resources. If no value is defined in either /etc/systemd/system.conf or by setrlimit , then IdM DS sets the nsslapd-maxdescriptors attribute to 1048576. If an IdM DS administrator later decides to set a new value for nsslapd-maxdescriptors manually, then IdM DS compares the new value with what is defined locally, by setrlimit or in /etc/systemd/system.conf , with the following result: If the new value for nsslapd-maxdescriptors is higher than what is defined locally, then the server rejects the new value setting and continues to enforce the local limit value as the high watermark value. If the new value is lower than what is defined locally, then the new value will be used. This procedure describes how to set a new value for nsslapd-maxdescriptors . Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-maxdescriptors parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-maxdescriptors attribute. This example increases the value to 8192 . Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-maxdescriptors to a different value, or back to the default of 4096 . Verification Display the value of the nsslapd-maxdescriptors attribute and verify it has been set to your desired value. Additional resources nsslapd-maxdescriptors (Maximum File Descriptors) in Directory Server 12 documentation 7.8. Adjusting the connection backlog size The listen service sets the number of sockets available to receive incoming connections. The nsslapd-listen-backlog-size value sets the maximum length of the queue for the sockfd socket before refusing connections. If your IdM environment handles a large amount of connections, consider increasing the value of nsslapd-listen-backlog-size . Default value 128 queue slots Valid range 0 - 9223372036854775807 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-listen-backlog-size parameter and make a note of it before making any adjustments, in case it needs to be restored. 
Enter the Directory Manager password when prompted. Modify the value of the nsslapd-listen-backlog-size attribute. This example increases the value to 192 . Authenticate as the Directory Manager to make the configuration change. Verification Display the value of the nsslapd-listen-backlog-size attribute and verify it has been set to your desired value. Additional resources nsslapd-listen-backlog-size in Directory Server 11 documentation 7.9. Adjusting the maximum number of database locks Lock mechanisms control how many copies of Directory Server processes can run at the same time, and the nsslapd-db-locks parameter sets the maximum number of locks. Increase the maximum number of locks if you see the following error messages in the /var/log/dirsrv/slapd- instance_name /errors log file: Default value 50000 locks Valid range 0 - 2147483647 Entry DN location cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-db-locks parameter and make a note of it before making any adjustments, in case it needs to be restored. Modify the value of the locks attribute. This example doubles the value to 100000 locks. Authenticate as the Directory Manager to make the configuration change. Restart the Directory Server. Verification Display the value of the nsslapd-db-locks attribute and verify it has been set to your desired value. Additional resources nsslapd-db-locks in Directory Server 11 documentation 7.10. Disabling the Transparent Huge Pages feature The Transparent Huge Pages (THP) Linux memory management feature is enabled by default on RHEL. The THP feature can decrease the IdM Directory Server (DS) performance because DS has sparse memory access patterns. For information about how to disable the feature, see Disabling the Transparent Huge Pages feature in the Red Hat Directory Server documentation. Additional resources The negative effects of Transparent Huge Pages (THP) on RHDS 7.11. Adjusting the input/output block timeout The nsslapd-ioblocktimeout attribute sets the amount of time in milliseconds after which the connection to a stalled LDAP client is closed. An LDAP client is considered to be stalled when it has not made any I/O progress for read or write operations. Lower the value of the nsslapd-ioblocktimeout attribute to free up connections sooner. Default value 10000 milliseconds Valid range 0 - 2147483647 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-ioblocktimeout parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-ioblocktimeout attribute. This example lowers the value to 8000 . Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-ioblocktimeout to a different value, or back to the default of 10000 . Verification Display the value of the nsslapd-ioblocktimeout attribute and verify it has been set to your desired value. Additional resources nsslapd-ioblocktimeout (IO Block Time Out) in Directory Server 11 documentation 7.12. Adjusting the idle connection timeout The nsslapd-idletimeout attribute sets the amount of time in seconds after which an idle LDAP client connection is closed by the IdM server. 
A value of 0 means that the server never closes idle connections. Red Hat recommends adjusting this value so stale connections are closed, but active connections are not closed prematurely. Default value 3600 seconds (1 hour) Valid range 0 - 2147483647 Entry DN location cn=config Prerequisites The LDAP Directory Manager password Procedure Retrieve the current value of the nsslapd-idletimeout parameter and make a note of it before making any adjustments, in case it needs to be restored. Enter the Directory Manager password when prompted. Modify the value of the nsslapd-idletimeout attribute. This example lowers the value to 1800 (30 minutes). Authenticate as the Directory Manager to make the configuration change. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust nsslapd-idletimeout to a different value, or back to the default of 3600 . Verification Display the value of the nsslapd-idletimeout attribute and verify it has been set to your desired value. Additional resources nsslapd-idletimeout (Default Idle Timeout) in Directory Server 11 documentation 7.13. Adjusting the replication release timeout An IdM replica is exclusively locked during a replication session with another replica. In some environments, a replica is locked for a long time due to large updates or network congestion, which increases replication latency. You can release a replica after a fixed amount of time by adjusting the repl-release-timeout parameter. Red Hat recommends setting this value between 30 and 120 : If the value is set too low, replicas are constantly reacquiring one another and replicas are not able to send larger updates. A longer timeout can improve high-traffic situations where it is best if a server exclusively accesses a replica for longer amounts of time, but a value higher than 120 seconds slows down replication. Default value 60 seconds Valid range 0 - 2147483647 Recommended range 30 - 120 Prerequisites The LDAP Directory Manager password Procedure Display the database suffixes and their corresponding back ends. This command displays the names of the back end databases next to their suffix. Use the suffix name in the next step. Modify the value of the repl-release-timeout attribute for the main userroot database. This example increases the value to 90 seconds. Authenticate as the Directory Manager to make the configuration change. Optional: If your IdM environment uses the IdM Certificate Authority (CA), you can modify the value of the repl-release-timeout attribute for the CA database. This example increases the value to 90 seconds. Restart the Directory Server. Monitor the IdM directory server's performance. If it does not change in a desirable way, repeat this procedure and adjust repl-release-timeout to a different value, or back to the default of 60 seconds. Verification Display the value of the nsds5ReplicaReleaseTimeout attribute and verify it has been set to your desired value. Note The Distinguished Name of the suffix in this example is dc=example,dc=com , but the equals sign ( = ) and comma ( , ) must be escaped in the ldapsearch command. Convert the suffix DN to cn=dc\3Dexample\2Cdc\3Dcom with the following escape characters: \3D replacing = \2C replacing , Additional resources nsDS5ReplicaReleaseTimeout in Directory Server 11 documentation 7.14. 
Installing an IdM server or replica with custom database settings from an LDIF file You can install an IdM server and IdM replicas with custom settings for the Directory Server database. The following procedure shows you how to create an LDAP Data Interchange Format (LDIF) file with database settings, and how to pass those settings to the IdM server and replica installation commands. Prerequisites You have determined custom Directory Server settings that improve the performance of your IdM environment. See Adjusting IdM Directory Server performance . Procedure Create a text file in LDIF format with your custom database settings. Separate LDAP attribute modifications with a dash (-). This example sets non-default values for the idle timeout and maximum file descriptors. Use the --dirsrv-config-file parameter to pass the LDIF file to the installation script. To install an IdM server: To install an IdM replica: Additional resources Options for the ipa-server-install and ipa-replica-install commands 7.15. Additional resources Directory Server 11 Performance Tuning Guide
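The procedures in this chapter repeatedly ask you to monitor the Directory Server's performance after a change. One way to do this, shown here as a hedged sketch, is to query the LDBM back end monitoring entry and watch the cache hit ratios; the monitoring DN and attribute names used below (cn=monitor under the userroot back end, entrycachehitratio, dncachehitratio) are assumptions based on standard 389 Directory Server monitoring entries rather than values taken from this chapter: ldapsearch -D "cn=directory manager" -w DirectoryManagerPassword -b "cn=monitor,cn=userroot,cn=ldbm database,cn=plugins,cn=config" entrycachehitratio entrycachetries dncachehitratio A hit ratio that stays low after you increase a cache size suggests that the working set still does not fit in memory, while a consistently high ratio indicates the cache is large enough.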
|
[
"REASON: entry too large ( 83886080 bytes) for the import buffer size ( 67108864 bytes). Try increasing nsslapd-cachememsize.",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list cn=changelog (changelog) dc=example,dc=com ( userroot ) o=ipaca (ipaca)",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --cache-memsize= 2147483648 userroot",
"systemctl restart dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn= userroot ,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-cachememsize nsslapd-cachememsize: 2147483648",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --cache-autosize=0 --dbcachesize=268435456",
"systemctl restart dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-dbcachesize nsslapd-dbcachesize: 2147483648",
"systemctl stop dirsrv.target",
"*cp /etc/dirsrv/ slapd-instance_name /dse.ldif /etc/dirsrv/ slapd-instance_name /dse.ldif.bak.USD(date \"+%F_%H-%M-%S\")",
"nsslapd-cache-autosize: 10",
"nsslapd-cache-autosize-split: 25",
"systemctl start dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-cache-autosize nsslapd-cache-autosize: *10 nsslapd-cache-autosize-split: 25",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list dc=example,dc=com ( userroot )",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix set --dncache-memsize= 20971520 userroot",
"systemctl restart dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn= userroot ,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-dncachememsize nsslapd-dncachememsize: 20971520",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ndn-cache-enabled Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ndn-cache-enabled: on",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ndn-cache-enabled=on Enter password for cn=Directory Manager on ldap://server.example.com: Successfully replaced \"nsslapd-ndn-cache-enabled\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ndn-cache-max-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ndn-cache-max-size: 20971520",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ndn-cache-max-size= 41943040",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ndn-cache-max-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ndn-cache-max-size: 41943040",
"Incoming BER Element was too long, max allowable is 2097152 bytes. Change the nsslapd-maxbersize attribute in cn=config to increase.",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxbersize Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxbersize: 2097152",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-maxbersize= 4194304",
"Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-maxbersize\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxbersize Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxbersize: 4194304",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxdescriptors Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxdescriptors: 4096",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-maxdescriptors= 8192",
"Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-maxdescriptors\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-maxdescriptors Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-maxdescriptors: 8192",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-listen-backlog-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-listen-backlog-size: 128",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-listen-backlog-size= 192",
"Enter password for cn=Directory Manager on ldap://server.example.com: Successfully replaced \"nsslapd-listen-backlog-size\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-listen-backlog-size Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-listen-backlog-size: 192",
"libdb: Lock table is out of available locks",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-db-locks nsslapd-db-locks: 50000",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend config set --locks= 100000",
"Enter password for cn=Directory Manager on ldap://server.example.com : Successfully updated database configuration",
"systemctl restart dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=bdb,cn=config,cn=ldbm database,cn=plugins,cn=config\" | grep nsslapd-db-locks nsslapd-db-locks: 100000",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ioblocktimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-ioblocktimeout: 10000",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-ioblocktimeout= 8000",
"Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-ioblocktimeout\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-ioblocktimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-idletimeout: 8000",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-idletimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-idletimeout: 3600",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config replace nsslapd-idletimeout= 1800",
"Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"nsslapd-idletimeout\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com config get nsslapd-idletimeout Enter password for cn=Directory Manager on ldap://server.example.com: nsslapd-idletimeout: 3600",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend suffix list cn=changelog (changelog) dc=example,dc=com (userroot) o=ipaca (ipaca)",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication set --suffix=\" dc=example,dc=com \" --repl-release-timeout= 90",
"Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"repl-release-timeout\"",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication set --suffix=\"o=ipaca\" --repl-release-timeout= 90 Enter password for cn=Directory Manager on ldap://server.example.com : Successfully replaced \"repl-release-timeout\"",
"systemctl restart dirsrv.target",
"ldapsearch -D \"cn=directory manager\" -w DirectoryManagerPassword -b \"cn=replica,cn= dc\\3Dexample\\2Cdc\\3Dcom ,cn=mapping tree,cn=config\" | grep nsds5ReplicaReleaseTimeout nsds5ReplicaReleaseTimeout: 90",
"dn: cn=config changetype: modify replace: nsslapd-idletimeout nsslapd-idletimeout: 1800 - replace: nsslapd-maxdescriptors nsslapd-maxdescriptors: 8192",
"ipa-server-install --dirsrv-config-file filename.ldif",
"ipa-replica-install --dirsrv-config-file filename.ldif"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/tuning_performance_in_identity_management/adjusting-idm-directory-server-performance_tuning-performance-in-idm
|
Chapter 3. Performing Additional Configuration on Satellite Server
|
Chapter 3. Performing Additional Configuration on Satellite Server 3.1. Configuring Satellite Server to consume content from a custom CDN If you have an internal Content Delivery Network (CDN) or serve content on an accessible web server, you can configure your Satellite Server to consume Red Hat repositories from this CDN server instead of the Red Hat CDN. A CDN server can be any web server that mirrors repositories in the same directory structure as the Red Hat CDN. You can configure the source of content for each organization. Satellite automatically recognizes which Red Hat repositories from the subscription manifest in your organization are available on your CDN server. Prerequisites You have a CDN server that provides Red Hat content and is accessible by Satellite Server. If your CDN server uses HTTPS, ensure you have uploaded the SSL certificate into Satellite. For more information, see Importing Custom SSL Certificates in Managing content . You have uploaded a manifest to your organization. Procedure In the Satellite web UI, navigate to Content > Subscriptions . Click Manage Manifest . Select the CDN Configuration tab. Select the Custom CDN tab. In the URL field, enter the URL of your CDN server from which you want Satellite Server to consume Red Hat repositories. Optional: In the SSL CA Content Credential , select the SSL certificate of the CDN server. Click Update . You can now enable Red Hat repositories consumed from your internal CDN server. CLI procedure Connect to your Satellite Server using SSH. Set CDN configuration to your custom CDN server: Additional resources Content Delivery Network Structure in Overview, concepts, and deployment considerations 3.2. Configuring Inter-Satellite Synchronization (ISS) Configure Inter-Satellite Synchronization on your disconnected Satellite Server to provide content in your disconnected network. 3.2.1. Inter-Satellite Synchronization scenarios Red Hat Satellite uses Inter-Satellite Synchronization (ISS) to synchronize content between two Satellite Servers including those that are air gapped. You can use ISS in cases such as: If you want to copy some but not all content from your Satellite Server to other Satellite Servers. For example, you have content views that your IT department consumes from Satellite Server, and you want to copy content from those content views to other Satellite Servers. If you want to copy all Library content from your Satellite Server to other Satellite Servers. For example, you have Products and repositories that your IT department consumes from Satellite Server in the Library, and you want to copy all Products and repositories in that organization to other Satellite Servers. Note You cannot use ISS to synchronize content from Satellite Server to Capsule Server. Capsule Server supports synchronization natively. For more information, see Capsule Server Overview in Overview, concepts, and deployment considerations . There are different ways of using ISS. The way you use it depends on your multi-server setup, which can fall into one of the following scenarios. 3.2.1.1. ISS network sync in a disconnected scenario In a disconnected scenario, there is the following setup: The upstream Satellite Server is connected to the Internet. This server consumes content from the Red Hat Content Delivery Network (CDN) or custom sources. The downstream Satellite Server is completely isolated from all external networks. The downstream Satellite Server can communicate with a connected upstream Satellite Server over an internal network. 
Figure 3.1. Satellite ISS disconnected scenario You can configure your downstream Satellite Server to synchronize content from the upstream Satellite Server over the network. 3.2.1.2. ISS export sync in an air-gapped scenario In an air-gapped scenario, there is the following setup: The upstream Satellite Server is connected to the Internet. This server consumes content from the Red Hat CDN or custom sources. The downstream Satellite Server is completely isolated from all external networks. The downstream Satellite Server does not have a network connection to a connected upstream Satellite Server. Figure 3.2. Satellite ISS air-gapped scenario The only way for an air-gapped downstream Satellite Server to receive content updates is by exporting payload from the upstream Satellite Server, bringing it physically to the downstream Satellite Server, and importing the payload. For more information, see Synchronizing Content Between Satellite Servers in Managing content . You can configure your downstream Satellite Server to synchronize content by using exports. 3.2.2. Configuring Satellite Server to synchronize content by using exports If you deployed your downstream Satellite Server as air gapped, configure your Satellite Server as such to avoid attempts to consume content from a network. Procedure In the Satellite web UI, navigate to Content > Subscriptions . Click Manage Manifest . Switch to the CDN Configuration tab. Select the Export Sync tab. Click Update . CLI procedure Log in to your Satellite Server by using SSH. Set CDN configuration to sync by using exports: Additional resources For more information, see Content synchronization by using export and import in Managing content . 3.2.3. Configuring Satellite Server to synchronize content over a network Configure a downstream Satellite Server to synchronize repositories from a connected upstream Satellite Server over HTTPS. Prerequisites A network connection exists between the upstream Satellite Server and the downstream Satellite Server. You imported the subscription manifest on both the upstream and downstream Satellite Server. On the upstream Satellite Server, you enabled the required repositories for the organization. For more information, see Enabling Red Hat Repositories in Managing content . The upstream user is an admin or has the following permissions: view_organizations view_products export_content view_lifecycle_environments view_content_views On the downstream Satellite Server, you have imported the SSL certificate of the upstream Satellite Server using the contents of http:// upstream-satellite.example.com /pub/katello-server-ca.crt . For more information, see Importing SSL Certificates in Managing content . The downstream user is an admin or has the permissions to create product repositories and organizations. Procedure Navigate to Content > Subscriptions . Click Manage Manifest . Navigate to the CDN Configuration tab. Select the Network Sync tab. In the URL field, enter the address of the upstream Satellite Server. In the Username , enter your username for upstream login. In the Password , enter your password or personal access token for upstream login. In the Organization label field, enter the label of the upstream organization. Optional: In the Lifecycle Environment Label field, enter the label of the upstream lifecycle environment. Default is Library . Optional: In the Content view label field, enter the label of the upstream content view. Default is Default_Organization_View . 
From the SSL CA Content Credential menu, select a CA certificate used by the upstream Satellite Server. Click Update . In the Satellite web UI, navigate to Content > Products . Select the product that contains the repositories that you want to synchronize. From the Select Action menu, select Sync Now to synchronize all repositories within the product. You can also create a synchronization plan to ensure updates on a regular basis. For more information, see Creating a Synchronization Plan in Managing content . CLI procedure Connect to your downstream Satellite Server using SSH. View information about the upstream CA certificate: Note the ID of the CA certificate for the step. Set CDN configuration to an upstream Satellite Server: The default lifecycle environment label is Library . The default content view label is Default_Organization_View . 3.3. Configuring pull-based transport for remote execution By default, remote execution uses push-based SSH as the transport mechanism for the Script provider. If your infrastructure prohibits outgoing connections from Satellite Server to hosts, you can use remote execution with pull-based transport instead, because the host initiates the connection to Satellite Server. The use of pull-based transport is not limited to those infrastructures. The pull-based transport comprises pull-mqtt mode on Capsules in combination with a pull client running on hosts. Note The pull-mqtt mode works only with the Script provider. Ansible and other providers will continue to use their default transport settings. Procedure Enable the pull-based transport on your Satellite Server: Configure the firewall to allow the MQTT service on port 1883: Make the changes persistent: In pull-mqtt mode, hosts subscribe for job notifications to either your Satellite Server or any Capsule Server through which they are registered. Ensure that Satellite Server sends remote execution jobs to that same Satellite Server or Capsule Server: In the Satellite web UI, navigate to Administer > Settings . On the Content tab, set the value of Prefer registered through Capsule for remote execution to Yes . steps Configure your hosts for the pull-based transport. For more information, see Transport modes for remote execution in Managing hosts . 3.4. Enabling power management on hosts To perform power management tasks on hosts using the intelligent platform management interface (IPMI) or a similar protocol, you must enable the baseboard management controller (BMC) module on Satellite Server. Prerequisites All hosts must have a network interface of BMC type. Satellite Server uses this NIC to pass the appropriate credentials to the host. For more information, see Adding a Baseboard Management Controller (BMC) Interface in Managing hosts . Procedure To enable BMC, enter the following command: 3.5. Configuring DNS, DHCP, and TFTP You can manage DNS, DHCP, and TFTP centrally within the Satellite environment, or you can manage them independently after disabling their maintenance on Satellite. You can also run DNS, DHCP, and TFTP externally, outside of the Satellite environment. 3.5.1. Configuring DNS, DHCP, and TFTP on Satellite Server To configure the DNS, DHCP, and TFTP services on Satellite Server, use the satellite-installer command with the options appropriate for your environment. Any changes to the settings require entering the satellite-installer command again. You can enter the command multiple times and each time it updates all configuration files with the changed values. 
Prerequisites Ensure that the following information is available to you: DHCP IP address ranges DHCP gateway IP address DHCP nameserver IP address DNS information TFTP server name Use the FQDN instead of the IP address where possible in case of network changes. Contact your network administrator to ensure that you have the correct settings. Procedure Enter the satellite-installer command with the options appropriate for your environment. The following example shows configuring full provisioning services: You can monitor the progress of the satellite-installer command displayed in your prompt. You can view the logs in /var/log/foreman-installer/satellite.log . Additional resources For more information about the satellite-installer command, enter satellite-installer --help . 3.5.2. Disabling DNS, DHCP, and TFTP for unmanaged networks If you want to manage TFTP, DHCP, and DNS services manually, you must prevent Satellite from maintaining these services on the operating system and disable orchestration to avoid DHCP and DNS validation errors. However, Satellite does not remove the back-end services on the operating system. Procedure On Satellite Server, enter the following command: In the Satellite web UI, navigate to Infrastructure > Subnets and select a subnet. Click the Capsules tab and clear the DHCP Capsule , TFTP Capsule , and Reverse DNS Capsule fields. In the Satellite web UI, navigate to Infrastructure > Domains and select a domain. Clear the DNS Capsule field. Optional: If you use a DHCP service supplied by a third party, configure your DHCP server to pass the following options: For more information about DHCP options, see RFC 2132 . Note Satellite does not perform orchestration when a Capsule is not set for a given subnet and domain. When enabling or disabling Capsule associations, orchestration commands for existing hosts can fail if the expected records and configuration files are not present. When associating a Capsule to turn orchestration on, ensure the required DHCP and DNS records as well as the TFTP files are in place for the existing Satellite hosts in order to prevent host deletion failures in the future. 3.5.3. Additional resources For more information about configuring DNS, DHCP, and TFTP externally, see Chapter 4, Configuring Satellite Server with external services . For more information about configuring DHCP, DNS, and TFTP services, see Configuring Network Services in Provisioning hosts . 3.6. Configuring Satellite Server for outgoing emails To send email messages from Satellite Server, you can use either an SMTP server, or the sendmail command. Prerequisites Some SMTP servers with anti-spam protection or grey-listing features are known to cause problems. To setup outgoing email with such a service either install and configure a vanilla SMTP service on Satellite Server for relay or use the sendmail command instead. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Email tab and set the configuration options to match your preferred delivery method. The changes have an immediate effect. The following example shows the configuration options for using an SMTP server: Table 3.1. Using an SMTP server as a delivery method Name Example value Delivery method SMTP SMTP address smtp.example.com SMTP authentication login SMTP HELO/EHLO domain example.com SMTP password password SMTP port 25 SMTP username [email protected] The SMTP username and SMTP password specify the login credentials for the SMTP server. 
The following example uses gmail.com as an SMTP server: Table 3.2. Using gmail.com as an SMTP server Name Example value Delivery method SMTP SMTP address smtp.gmail.com SMTP authentication plain SMTP HELO/EHLO domain smtp.gmail.com SMTP enable StartTLS auto Yes SMTP password password SMTP port 587 SMTP username user @gmail.com The following example uses the sendmail command as a delivery method: Table 3.3. Using sendmail as a delivery method Name Example value Delivery method Sendmail Sendmail location /usr/sbin/sendmail Sendmail arguments -i For security reasons, both Sendmail location and Sendmail argument settings are read-only and can be only set in /etc/foreman/settings.yaml . Both settings currently cannot be set via satellite-installer . For more information see the sendmail 1 man page. If you decide to send email using an SMTP server which uses TLS authentication, also perform one of the following steps: Mark the CA certificate of the SMTP server as trusted. To do so, execute the following commands on Satellite Server: Where mailca.crt is the CA certificate of the SMTP server. Alternatively, in the Satellite web UI, set the SMTP enable StartTLS auto option to No . Click Test email to send a test message to the user's email address to confirm the configuration is working. If a message fails to send, the Satellite web UI displays an error. See the log at /var/log/foreman/production.log for further details. Additional resources For information on configuring email notifications for individual users or user groups, see Configuring Email Notification Preferences in Administering Red Hat Satellite . 3.7. Configuring Satellite Server with a custom SSL certificate By default, Red Hat Satellite uses a self-signed SSL certificate to enable encrypted communications between Satellite Server, external Capsule Servers, and all hosts. If you cannot use a Satellite self-signed certificate, you can configure Satellite Server to use an SSL certificate signed by an external certificate authority (CA). When you configure Red Hat Satellite with custom SSL certificates, you must fulfill the following requirements: You must use the privacy-enhanced mail (PEM) encoding for the SSL certificates. You must not use the same SSL certificate for both Satellite Server and Capsule Server. The same CA must sign certificates for Satellite Server and Capsule Server. An SSL certificate must not also be a CA certificate. An SSL certificate must include a subject alt name (SAN) entry that matches the common name (CN). An SSL certificate must be allowed for Key Encipherment using a Key Usage extension. An SSL certificate must not have a shortname as the CN. You must not set a passphrase for the private key. To configure your Satellite Server with a custom certificate, complete the following procedures: Section 3.7.1, "Creating a custom SSL certificate for Satellite Server" Section 3.7.2, "Deploying a custom SSL certificate to Satellite Server" Section 3.7.3, "Deploying a custom SSL certificate to hosts" If you have external Capsule Servers registered to Satellite Server, configure them with custom SSL certificates. For more information, see Configuring Capsule Server with a Custom SSL Certificate in Installing Capsule Server . 3.7.1. Creating a custom SSL certificate for Satellite Server Use this procedure to create a custom SSL certificate for Satellite Server. If you already have a custom SSL certificate for Satellite Server, skip this procedure. 
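Before running katello-certs-check, you can pre-screen a candidate certificate against the requirements listed above with plain openssl. This is a sketch only; the file paths follow the examples used later in this chapter and are assumptions, not required locations.
# Show the SAN and Key Usage extensions of the signed certificate
openssl x509 -in /root/satellite_cert/satellite_cert.pem -noout -text | grep -E -A1 'Subject Alternative Name|Key Usage'
# Verify the private key loads without prompting for a passphrase
openssl rsa -in /root/satellite_cert/satellite_cert_key.pem -check -noout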
Procedure To store all the source certificate files, create a directory that is accessible only to the root user: Create a private key with which to sign the certificate signing request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Satellite Server, skip this step. Create the /root/satellite_cert/openssl.cnf configuration file for the CSR and include the following content: Optional: If you want to add Distinguished Name (DN) details to the CSR, add the following information to the [ req_distinguished_name ] section: 1 Two letter code 2 Full name 3 Full name (example: New York) 4 Division responsible for the certificate (example: IT department) Generate CSR: 1 Path to the private key 2 Path to the configuration file 3 Path to the CSR to generate Send the certificate signing request to the certificate authority (CA). The same CA must sign certificates for Satellite Server and Capsule Server. When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the CA for the preferred method. In response to the request, you can expect to receive a CA bundle and a signed certificate, in separate files. 3.7.2. Deploying a custom SSL certificate to Satellite Server Use this procedure to configure your Satellite Server to use a custom SSL certificate signed by a Certificate Authority. The katello-certs-check command validates the input certificate files and returns the commands necessary to deploy a custom SSL certificate to Satellite Server. Important Do not store the SSL certificates or .tar bundles in /tmp or /var/tmp directory. The operating system removes files from these directories periodically. As a result, satellite-installer fails to execute while enabling features or upgrading Satellite Server. Procedure Validate the custom SSL certificate input files. Note that for the katello-certs-check command to work correctly, Common Name (CN) in the certificate must match the FQDN of Satellite Server. 1 Path to Satellite Server certificate file that is signed by a Certificate Authority. 2 Path to the private key that was used to sign Satellite Server certificate. 3 Path to the Certificate Authority bundle. If the command is successful, it returns two satellite-installer commands, one of which you must use to deploy a certificate to Satellite Server. Example output of katello-certs-check Note that you must not access or modify /root/ssl-build . From the output of the katello-certs-check command, depending on your requirements, enter the satellite-installer command that installs a new Satellite with custom SSL certificates or updates certificates on a currently running Satellite. If you are unsure which command to run, you can verify that Satellite is installed by checking if the file /etc/foreman-installer/scenarios.d/.installed exists. If the file exists, run the second satellite-installer command that updates certificates. Important satellite-installer needs the certificate archive file after you deploy the certificate. Do not modify or delete it. It is required, for example, when upgrading Satellite Server. On a computer with network access to Satellite Server, navigate to the following URL: https://satellite.example.com . In your browser, view the certificate details to verify the deployed certificate. 3.7.3. 
Deploying a custom SSL certificate to hosts After you configure Satellite to use a custom SSL certificate, you must deploy the certificate to hosts registered to Satellite. Procedure Update the SSL certificate on each host: 3.8. Using external databases with Satellite As part of the installation process for Red Hat Satellite, the satellite-installer command installs PostgreSQL databases on the same server as Satellite. In certain Satellite deployments, using external databases instead of the default local databases can help with the server load. Red Hat does not provide support or tools for external database maintenance. This includes backups, upgrades, and database tuning. You must have your own database administrator to support and maintain external databases. To create and use external databases for Satellite, you must complete the following procedures: Section 3.8.2, "Preparing a host for external databases" . Prepare a Red Hat Enterprise Linux 8 server to host the external databases. Section 3.8.3, "Installing PostgreSQL" . Prepare PostgreSQL with databases for Satellite, Candlepin and Pulp with dedicated users owning them. Section 3.8.4, "Configuring Satellite Server to use external databases" . Edit the parameters of satellite-installer to point to the new databases, and run satellite-installer . 3.8.1. PostgreSQL as an external database considerations Foreman, Katello, and Candlepin use the PostgreSQL database. If you want to use PostgreSQL as an external database, the following information can help you decide if this option is right for your Satellite configuration. Satellite supports PostgreSQL version 12. Advantages of external PostgreSQL Increase in free memory and free CPU on Satellite Flexibility to set shared_buffers on the PostgreSQL database to a high number without the risk of interfering with other services on Satellite Flexibility to tune the PostgreSQL server's system without adversely affecting Satellite operations Disadvantages of external PostgreSQL Increase in deployment complexity that can make troubleshooting more difficult The external PostgreSQL server is an additional system to patch and maintain If either Satellite or the PostgreSQL database server suffers a hardware or storage failure, Satellite is not operational If there is latency between the Satellite server and database server, performance can suffer If you suspect that the PostgreSQL database on your Satellite is causing performance problems, use the information in Satellite 6: How to enable postgres query logging to detect slow running queries to determine if you have slow queries. Queries that take longer than one second are typically caused by performance issues with large installations, and moving to an external database might not help. If you have slow queries, contact Red Hat Support. 3.8.2. Preparing a host for external databases Install a freshly provisioned system with the latest Red Hat Enterprise Linux 8 to host the external databases. Subscriptions for Red Hat Enterprise Linux do not provide the correct service level agreement for using Satellite with external databases. You must also attach a Satellite subscription to the base operating system that you want to use for the external databases. Prerequisites The prepared host must meet Satellite's Storage Requirements . Procedure Use the instructions in Attaching the Satellite Infrastructure Subscription to attach a Satellite subscription to your server. 
Disable all repositories and enable only the following repositories: Enable the following module: Note Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . 3.8.3. Installing PostgreSQL You can install only the same version of PostgreSQL that is installed with the satellite-installer tool during an internal database installation. Satellite supports PostgreSQL version 12. Procedure To install PostgreSQL, enter the following command: To initialize PostgreSQL, enter the following command: Edit the /var/lib/pgsql/data/postgresql.conf file: Note that the default configuration of external PostgreSQL needs to be adjusted to work with Satellite. The base recommended external database configuration adjustments are as follows: checkpoint_completion_target: 0.9 max_connections: 500 shared_buffers: 512MB work_mem: 4MB Remove the # and edit to listen to inbound connections: Edit the /var/lib/pgsql/data/pg_hba.conf file: Add the following line to the file: To start, and enable PostgreSQL service, enter the following commands: Open the postgresql port on the external PostgreSQL server: Make the changes persistent: Switch to the postgres user and start the PostgreSQL client: Create three databases and dedicated roles: one for Satellite, one for Candlepin, and one for Pulp: Connect to the Pulp database: Create the hstore extension: Exit the postgres user: From Satellite Server, test that you can access the database. If the connection succeeds, the commands return 1 . 3.8.4. Configuring Satellite Server to use external databases Use the satellite-installer command to configure Satellite to connect to an external PostgreSQL database. Prerequisites You have installed and configured a PostgreSQL database on a Red Hat Enterprise Linux server. Procedure To configure the external databases for Satellite, enter the following command: To enable the Secure Sockets Layer (SSL) protocol for these external databases, add the following options:
|
[
"hammer organization configure-cdn --name=\" My_Organization \" --type=custom_cdn --url https:// my-cdn.example.com --ssl-ca-credential-id \" My_CDN_CA_Cert_ID \"",
"hammer organization configure-cdn --name=\" My_Organization \" --type=export_sync",
"hammer content-credential show --name=\" My_Upstream_CA_Cert \" --organization=\" My_Downstream_Organization \"",
"hammer organization configure-cdn --name=\" My_Downstream_Organization \" --type=network_sync --url https:// upstream-satellite.example.com --username upstream_username --password upstream_password --ssl-ca-credential-id \" My_Upstream_CA_Cert_ID\" \\ --upstream-organization-label=\"_My_Upstream_Organization \" [--upstream-lifecycle-environment-label=\" My_Lifecycle_Environment \"] [--upstream-content-view-label=\" My_Content_View \"]",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt",
"firewall-cmd --add-service=mqtt",
"firewall-cmd --runtime-to-permanent",
"satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3",
"satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false",
"Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0",
"cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust",
"mkdir /root/satellite_cert",
"openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = satellite.example.com",
"[req_distinguished_name] CN = satellite.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3",
"katello-certs-check -c /root/satellite_cert/satellite_cert.pem \\ 1 -k /root/satellite_cert/satellite_cert_key.pem \\ 2 -b /root/satellite_cert/ca_cert_bundle.pem 3",
"Validation succeeded. To install the Red Hat Satellite Server with the custom certificates, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" To update the certificates on a currently running Red Hat Satellite installation, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca",
"dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr postgresql-contrib",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /32 md5",
"systemctl enable --now postgresql",
"firewall-cmd --add-service=postgresql",
"firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".",
"pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-installer --foreman-db-database foreman --foreman-db-host postgres.example.com --foreman-db-manage false --foreman-db-password Foreman_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-manage-db false",
"--foreman-db-root-cert <path_to_CA> --foreman-db-sslmode verify-full --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-ca <path_to_CA> --katello-candlepin-db-ssl-verify true"
] |
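The SSL-related options above are meant to be appended to the base external-database command. For convenience, a combined invocation might look like the following sketch; the hostname and passwords are the example placeholders used above, and <path_to_CA> must be replaced with the CA certificate path for your database server.
# Combined external-database configuration with SSL enabled (sketch)
satellite-installer \
  --foreman-db-host postgres.example.com \
  --foreman-db-database foreman \
  --foreman-db-manage false \
  --foreman-db-password Foreman_Password \
  --foreman-db-root-cert <path_to_CA> \
  --foreman-db-sslmode verify-full \
  --foreman-proxy-content-pulpcore-manage-postgresql false \
  --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com \
  --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore \
  --foreman-proxy-content-pulpcore-postgresql-user pulp \
  --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password \
  --foreman-proxy-content-pulpcore-postgresql-ssl true \
  --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA> \
  --katello-candlepin-db-host postgres.example.com \
  --katello-candlepin-db-name candlepin \
  --katello-candlepin-db-password Candlepin_Password \
  --katello-candlepin-manage-db false \
  --katello-candlepin-db-ssl true \
  --katello-candlepin-db-ssl-ca <path_to_CA> \
  --katello-candlepin-db-ssl-verify true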
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/performing-additional-configuration
|
Chapter 3. Cloning Satellite Server
|
Chapter 3. Cloning Satellite Server You can clone your Satellite Server to create instances to test upgrades and migration of instances to a different machine or operating system. This is an optional step to provide more flexibility during the upgrade or migration. You cannot use the Satellite clone tool on a Capsule Server. Instead, you must backup the existing Capsule Server, restore it on the target server, and then reconfigure Capsule Server. Note If you create a new instance of the Satellite Server, decommission the old instances after restoring the backup. Cloned instances are not supposed to run in parallel in a production environment. Terminology Ensure that you understand the following terms: Source server The origin of the clone. Target server The new server that you copy files to and clone the source server to. 3.1. Cloning process overview Back up the source server. Clone the source server to the target server. Power off the source server. Update the network configuration on the target server to match the target server's IP address with its new host name. Test the new target server. 3.2. Prerequisites To clone Satellite Server, ensure that you have the following resources available: A minimal install of Red Hat Enterprise Linux 8 to become the target server. Do not install Red Hat Enterprise Linux 8 software groups or third-party applications. Ensure that your server complies with all the required specifications. For more information, see Preparing your Environment for Installation in Installing Satellite Server in a connected network environment . A backup of your Satellite Server that you make using the satellite-maintain backup script. You can use a backup with or without Pulp data. A Satellite subscription for the target server. Before you begin cloning, ensure the following conditions exist: The target server is on an isolated network. This avoids unwanted communication with Capsule Servers and hosts. The target server has at least the same storage capacity as the source server. Customized configuration files If you have any customized configurations on your source server that are not managed by the satellite-installer tool or Satellite backup process, you must manually back up these files. 3.3. Pulp data considerations You can clone Satellite server without including Pulp data. However, for your cloned environment to work, you do require Pulp data. If the target server does not have Pulp data, it is not a fully working Satellite. To transfer Pulp data to a target server, you have two options: Clone using backup with Pulp data Clone using backup without Pulp data and copy /var/lib/pulp manually from the source server. If your pulp_data.tar file is greater than 500 GB, or if you use a slow storage system, such as NFS, and your pulp_data.tar file is greater than 100 GB, do not include pulp_data.tar in the backup because this can cause memory errors during extraction. Copy the pulp_data.tar file from the source server to the target server. To back up without Pulp data Follow the steps in the procedure in Section 3.4, "Cloning Satellite Server" and replace the steps that involve cloning with Pulp data with the following steps: Perform a backup with PostgreSQL databases active excluding the Pulp data: Stop and disable Satellite services: Copy the Pulp data to the target server: Proceed to Section 3.4.2, "Cloning to the target server" . 3.4. Cloning Satellite Server Use the following procedures to clone Satellite Server. 
Note that because of the high volume of data that you must copy and transfer as part of these procedures, it can take a significant amount of time to complete. 3.4.1. Preparing the source server for cloning On the source server, complete the following steps: Determine the size of the Pulp data: If you have less than 500 GB of Pulp data, perform a backup with PostgreSQL databases active including the Pulp data. If you have more than 500 GB of Pulp data, skip the following steps and complete the steps in Section 3.3, "Pulp data considerations" before you continue. Stop and disable Satellite services: Proceed to Section 3.4.2, "Cloning to the target server" . 3.4.2. Cloning to the target server To clone your server, complete the following steps on your target server: The satellite-clone tool defaults to using /backup/ as the backup folder. If you copy to a different folder, update the backup_dir variable in the /etc/satellite-clone/satellite-clone-vars.yml file. Place the backup files from the source Satellite in the /backup/ folder on the target server. You can either mount the shared storage or copy the backup files to the /backup/ folder on the target server. Power off the source server. Register your instance to the Red Hat Customer Portal and enable only the required repositories: Install the satellite-clone package: After you install the satellite-clone tool, you can adjust any configuration to suit your own deployment in the /etc/satellite-clone/satellite-clone-vars.yml file. Run the satellite-clone tool: Reconfigure DHCP, DNS, TFTP, and remote execution services. The cloning process disables these services on the target Satellite Server to avoid conflict with the source Satellite Server. Reconfigure and enable DHCP, DNS, and TFTP in the Satellite web UI. For more information, see Configuring External Services on Satellite Server in Installing Satellite Server in a connected network environment . Log in to the Satellite web UI, with the username admin and the password changeme . Immediately update the admin password to secure credentials. Ensure that the correct organization is selected. In the Satellite web UI, navigate to Content > Subscriptions . Click Manage Manifest . Click Refresh and then click Close to return to the list of subscriptions. Verify that the available subscriptions are correct. Follow the instructions in the /usr/share/satellite-clone/logs/reassociate_capsules.txt file to restore the associations between Capsules and their lifecycle environments. Update your network configuration, for example, DNS, to match the target server's IP address with its new host name. The satellite-clone tool changes the host name to the source server's host name. If you want to change the host name to something different, you can use the satellite-change-hostname tool. For more information, see Renaming Satellite Server in Administering Red Hat Satellite . If the source server uses the virt-who daemon, install and configure it on the target server. Copy all the virt-who configuration files in the /etc/virt-who.d/ directory from the source server to the same directory on the target server. For more information, see Configuring virtual machine subscriptions . After you perform an upgrade using the following chapters, you can safely decommission the source server.
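If you copy the backup over the network rather than mounting shared storage, the same rsync options used for the Pulp data transfer work for the backup directory as well. A sketch, assuming the backup was written to /var/backup as in the commands below; target_server.example.com is a placeholder.
# Copy the offline backup to the satellite-clone default location on the target
rsync --archive --partial --progress --compress /var/backup/ target_server.example.com:/backup/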
|
[
"satellite-maintain backup offline --skip-pulp-content --assumeyes /var/backup",
"satellite-maintain service stop satellite-maintain service disable",
"rsync --archive --partial --progress --compress /var/lib/pulp/ target_server.example.com:/var/lib/pulp/",
"du -sh /var/lib/pulp/",
"satellite-maintain backup offline --assumeyes /var/backup",
"satellite-maintain service stop satellite-maintain service disable",
"subscription-manager register your_customer_portal_credentials subscription-manager repos --disable=* subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms dnf module enable satellite-maintenance:el8",
"dnf install satellite-clone",
"satellite-clone"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/cloning_satellite_server
|
30.3. Configuring the Location for Looking up sudo Policies
|
30.3. Configuring the Location for Looking up sudo Policies The centralized IdM database for sudo configuration makes the sudo policies defined in IdM globally available to all domain hosts. On Red Hat Enterprise Linux 7.1 systems and later, the ipa-server-install and ipa-client-install utilities automatically configure the system to use the IdM-defined policies by setting SSSD as the data provider for sudo . The location for looking up the sudo policies is defined on the sudoers line of the /etc/nsswitch.conf file. On IdM systems running Red Hat Enterprise Linux 7.1 and later, the default sudoers configuration in nsswitch.conf is: The files option specifies that the system uses the sudo configuration defined in the /etc/sudoers local SSSD configuration file. The sss option specifies that the sudo configuration defined in IdM is used. 30.3.1. Configuring Hosts to Use IdM sudo Policies in Earlier Versions of IdM To implement the IdM-defined sudo policies on IdM systems running Red Hat Enterprise Linux versions earlier than 7.1, configure the local machines manually. You can do this using SSSD or LDAP. Red Hat strongly recommends to use the SSSD-based configuration. 30.3.1.1. Applying the sudo Policies to Hosts Using SSSD Follow these steps on each system that is required to use SSSD for sudo rules: Configure sudo to look to SSSD for the sudoers file. Leaving the files option in place allows sudo to check its local configuration before checking SSSD for the IdM configuration. Add sudo to the list of services managed by the local SSSD client. Set a name for the NIS domain in the sudo configuration. sudo uses NIS-style netgroups, so the NIS domain name must be set in the system configuration for sudo to be able to find the host groups used in the IdM sudo configuration. Enable the rhel-domainname service if it is not already enabled to ensure that the NIS domain name will be persistent across reboots. Set the NIS domain name to use with the sudo rules. Configure the system authentication settings to persist the NIS domain name. For example: This updates the /etc/sysconfig/network and /etc/yp.conf files with the NIS domain. Restart the rhel-domainname service: Optionally, enable debugging in SSSD to show what LDAP settings it is using. The LDAP search base used by SSSD for operations is recorded in the sssd_ DOMAINNAME .log log. 30.3.1.2. Applying the sudo Policies to Hosts Using LDAP Important Only use the LDAP-based configuration for clients that do not use SSSD. Red Hat recommends to configure all other clients using the SSSD-based configuration, as described in Section 30.3.1.1, "Applying the sudo Policies to Hosts Using SSSD" . For information on applying sudo policies using LDAP, see the Applying the sudo Policies to Hosts Using LDAP in the Red Hat Enterprise Linux 6 Identity Management Guide . The LDAP-based configuration is expected to be used primarily for clients based on Red Hat Enterprise Linux versions earlier than Red Hat Enterprise Linux 7. It is therefore only described in the documentation for Red Hat Enterprise Linux 6.
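A few quick checks after the configuration above can confirm that sudo rules are being resolved through SSSD. This is a sketch; testuser is a placeholder for an IdM user that has a sudo rule assigned.
# Confirm the lookup order includes sss
grep '^sudoers' /etc/nsswitch.conf
# Confirm the NIS domain name that sudo uses for netgroup matching
nisdomainname
# As root, list the sudo rules that apply to an IdM user
sudo -l -U testuser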
|
[
"sudoers: files sss",
"vim /etc/nsswitch.conf sudoers: files sss",
"vim /etc/sssd/sssd.conf [sssd] config_file_version = 2 services = nss, pam, sudo domains = IPADOMAIN",
"systemctl enable rhel-domainname.service",
"nisdomainname example.com",
"echo \"NISDOMAIN= example.com \" >> /etc/sysconfig/network",
"systemctl restart rhel-domainname.service",
"[domain/ IPADOMAIN ] debug_level = 6 ."
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/sudo-configuration-database
|
Chapter 29. Adding and removing tracepoints from a running perf collector without stopping or restarting perf
|
Chapter 29. Adding and removing tracepoints from a running perf collector without stopping or restarting perf By using the control pipe interface to enable and disable different tracepoints in a running perf collector, you can dynamically adjust what data you are collecting without having to stop or restart perf . This ensures you do not lose performance data that would have otherwise been recorded during the stopping or restarting process. 29.1. Adding tracepoints to a running perf collector without stopping or restarting perf Add tracepoints to a running perf collector using the control pipe interface to adjust the data you are recording without having to stop perf and losing performance data. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Configure the control pipe interface: Run perf record with the control file setup and events you are interested in enabling: In this example, declaring 'sched:*' after the -e option starts perf record with scheduler events. In a second terminal, start the read side of the control pipe: Starting the read side of the control pipe triggers the following message in the first terminal: In a third terminal, enable a tracepoint using the control file: This command triggers perf to scan the current event list in the control file for the declared event. If the event is present, the tracepoint is enabled and the following message appears in the first terminal: Once the tracepoint is enabled, the second terminal displays the output from perf detecting the tracepoint: 29.2. Removing tracepoints from a running perf collector without stopping or restarting perf Remove tracepoints from a running perf collector using the control pipe interface to reduce the scope of data you are collecting without having to stop perf and losing performance data. Prerequisites You have the perf user space tool installed as described in Installing perf . You have added tracepoints to a running perf collector via the control pipe interface. For more information, see Adding tracepoints to a running perf collector without stopping or restarting perf . Procedure Remove the tracepoint: Note This example assumes you have previously loaded scheduler events into the control file and enabled the tracepoint sched:sched_process_fork . This command triggers perf to scan the current event list in the control file for the declared event. If the event is present, the tracepoint is disabled and the following message appears in the terminal used to configure the control pipe:
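Before enabling or disabling events through the control pipe, it can help to confirm which tracepoints exist on the system. The following is a small sketch; sched:sched_switch is only an example tracepoint, and the control file name matches the procedure above.
# List the scheduler tracepoints known to perf on this kernel
perf list 'sched:*'
# Enable and later disable an example tracepoint through the control file
echo 'enable sched:sched_switch' > control
echo 'disable sched:sched_switch' > control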
|
[
"mkfifo control ack perf.pipe",
"perf record --control=fifo:control,ack -D -1 --no-buffering -e ' sched:* ' -o - > perf.pipe",
"cat perf.pipe | perf --no-pager script -i -",
"Events disabled",
"echo 'enable sched:sched_process_fork ' > control",
"event sched:sched_process_fork enabled",
"bash 33349 [034] 149587.674295: sched:sched_process_fork: comm=bash pid=33349 child_comm=bash child_pid=34056",
"echo 'disable sched:sched_process_fork ' > control",
"event sched:sched_process_fork disabled"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/turning-tracepoints-on-and-off-without-stopping-or-restarting-perf_monitoring-and-managing-system-status-and-performance
|
Chapter 11. Configuring the cluster network range
|
Chapter 11. Configuring the cluster network range As a cluster administrator, you can expand the cluster network range after cluster installation. You might want to expand the cluster network range if you need more IP addresses for additional nodes. For example, if you deployed a cluster and specified 10.128.0.0/19 as the cluster network range and a host prefix of 23 , you are limited to 16 nodes. You can expand that to 510 nodes by changing the CIDR mask on a cluster to /14 . When expanding the cluster network address range, your cluster must use the OVN-Kubernetes network plugin . Other network plugins are not supported. The following limitations apply when modifying the cluster network IP address range: The CIDR mask size specified must always be smaller than the currently configured CIDR mask size, because you can only increase IP space by adding more nodes to an installed cluster The host prefix cannot be modified Pods that are configured with an overridden default gateway must be recreated after the cluster network expands 11.1. Expanding the cluster network IP address range You can expand the IP address range for the cluster network. Because this change requires rolling out a new Operator configuration across the cluster, it can take up to 30 minutes to take effect. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster as a user with cluster-admin privileges. Ensure that the cluster uses the OVN-Kubernetes network plugin. Procedure To obtain the cluster network range and host prefix for your cluster, enter the following command: $ oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/22","hostPrefix":23}] To expand the cluster network IP address range, enter the following command. Use the CIDR IP address range and host prefix returned from the output of the previous command. $ oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"<network>/<cidr>","hostPrefix":<prefix>} ], "networkType": "OVNKubernetes" } }' where: <network> Specifies the network part of the cidr field that you obtained from the previous step. You cannot change this value. <cidr> Specifies the network prefix length. For example, 14 . Change this value to a smaller number than the value from the output in the previous step to expand the cluster network range. <prefix> Specifies the current host prefix for your cluster. This value must be the same value for the hostPrefix field that you obtained from the previous step. Example command $ oc patch Network.config.openshift.io cluster --type='merge' --patch \ '{ "spec":{ "clusterNetwork": [ {"cidr":"10.217.0.0/14","hostPrefix": 23} ], "networkType": "OVNKubernetes" } }' Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take up to 30 minutes for this change to take effect. $ oc get network.operator.openshift.io \ -o jsonpath="{.items[0].spec.clusterNetwork}" Example output [{"cidr":"10.217.0.0/14","hostPrefix":23}] 11.2. Additional resources Red Hat OpenShift Network Calculator About the OVN-Kubernetes network plugin
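As a rough capacity check, the number of nodes a given cluster network can hold is the number of hostPrefix-sized subnets that fit into the cluster network CIDR. The following shell arithmetic sketch uses the figures from the example above:
# nodes ~= 2^(hostPrefix - clusterNetworkMask)
cidr=14; host_prefix=23
echo "subnets available: $(( 2 ** (host_prefix - cidr) ))"
# Prints 512 for /14 with hostPrefix 23, consistent with the roughly 510 usable nodes cited above.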
|
[
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/22\",\"hostPrefix\":23}]",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"<network>/<cidr>\",\"hostPrefix\":<prefix>} ], \"networkType\": \"OVNKubernetes\" } }'",
"oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\":{ \"clusterNetwork\": [ {\"cidr\":\"10.217.0.0/14\",\"hostPrefix\": 23} ], \"networkType\": \"OVNKubernetes\" } }'",
"network.config.openshift.io/cluster patched",
"oc get network.operator.openshift.io -o jsonpath=\"{.items[0].spec.clusterNetwork}\"",
"[{\"cidr\":\"10.217.0.0/14\",\"hostPrefix\":23}]"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/configuring-cluster-network-range
|
Appendix A. Capsule Server scalability considerations
|
Appendix A. Capsule Server scalability considerations The maximum number of Capsule Servers that Satellite Server can support has no fixed limit. It was tested that a Satellite Server can support 17 Capsule Servers with 2 vCPUs. However, scalability is highly variable, especially when managing Puppet clients. Capsule Server scalability when managing Puppet clients depends on the number of CPUs, the run-interval distribution, and the number of Puppet managed resources. Capsule Server has a limitation of 100 concurrent Puppet agents running at any single point in time. Running more than 100 concurrent Puppet agents results in a 503 HTTP error. For example, assuming that Puppet agent runs are evenly distributed with less than 100 concurrent Puppet agents running at any single point during a run-interval, a Capsule Server with 4 CPUs has a maximum of 1250 - 1600 Puppet clients with a moderate workload of 10 Puppet classes assigned to each Puppet client. Depending on the number of Puppet clients required, the Satellite installation can scale out the number of Capsule Servers to support them. If you want to scale your Capsule Server when managing Puppet clients, the following assumptions are made: There are no external Puppet clients reporting directly to the Satellite integrated Capsule. All other Puppet clients report directly to an external Capsule. There is an evenly distributed run-interval of all Puppet agents. Note Deviating from the even distribution increases the risk of overloading Satellite Server. The limit of 100 concurrent requests applies. The following table describes the scalability limits using the recommended 4 CPUs. Table A.1. Puppet scalability using 4 CPUs Puppet Managed Resources per Host Run-Interval Distribution 1 3000 - 2500 10 2400 - 2000 20 1700 - 1400 The following table describes the scalability limits using the minimum 2 CPUs. Table A.2. Puppet scalability using 2 CPUs Puppet Managed Resources per Host Run-Interval Distribution 1 1700 - 1450 10 1500 - 1250 20 850 - 700
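The concurrency limit above can be reasoned about with simple arithmetic: under an evenly distributed run interval, the number of agents running at once is roughly the client count multiplied by the average agent run time and divided by the run interval. The sketch below illustrates this; the two-minute average run time is an assumption for illustration, not a documented figure.
# approx concurrent agents = clients * avg_run_minutes / run_interval_minutes
clients=1500; run_interval_min=30; avg_run_min=2
echo "approx concurrent Puppet agents: $(( clients * avg_run_min / run_interval_min ))"
# Prints 100, which matches the documented concurrency ceiling.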
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/capsule-server-scalability-considerations_capsule
|
Chapter 7. Troubleshooting
|
Chapter 7. Troubleshooting 7.1. Troubleshooting installations 7.1.1. Determining where installation issues occur When troubleshooting OpenShift Container Platform installation issues, you can monitor installation logs to determine at which stage issues occur. Then, retrieve diagnostic data relevant to that stage. OpenShift Container Platform installation proceeds through the following stages: Ignition configuration files are created. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. The control plane machines use the bootstrap machine to form an etcd cluster. The bootstrap machine starts a temporary Kubernetes control plane using the new etcd cluster. The temporary control plane schedules the production control plane to the control plane machines. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine adds OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. The control plane sets up the worker nodes. The control plane installs additional services in the form of a set of Operators. The cluster downloads and configures remaining components needed for the day-to-day operation, including the creation of worker machines in supported environments. 7.1.2. User-provisioned infrastructure installation considerations The default installation method uses installer-provisioned infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. You can alternatively install OpenShift Container Platform 4.13 on infrastructure that you provide. If you use this installation method, follow user-provisioned infrastructure installation documentation carefully. Additionally, review the following considerations before the installation: Check the Red Hat Enterprise Linux (RHEL) Ecosystem to determine the level of Red Hat Enterprise Linux CoreOS (RHCOS) support provided for your chosen server hardware or virtualization technology. Many virtualization and cloud environments require agents to be installed on guest operating systems. Ensure that these agents are installed as a containerized workload deployed through a daemon set. Install cloud provider integration if you want to enable features such as dynamic storage, on-demand service routing, node hostname to Kubernetes hostname resolution, and cluster autoscaling. Note It is not possible to enable cloud provider integration in OpenShift Container Platform environments that mix resources from different cloud providers, or that span multiple physical or virtual platforms. The node life cycle controller will not allow nodes that are external to the existing provider to be added to a cluster, and it is not possible to specify more than one cloud provider integration. A provider-specific Machine API implementation is required if you want to use machine sets or autoscaling to automatically provision OpenShift Container Platform cluster nodes. Check whether your chosen cloud provider offers a method to inject Ignition configuration files into hosts as part of their initial deployment. 
If they do not, you will need to host Ignition configuration files by using an HTTP server. The steps taken to troubleshoot Ignition configuration file issues will differ depending on which of these two methods is deployed. Storage needs to be manually provisioned if you want to leverage optional framework components such as the embedded container registry, Elasticsearch, or Prometheus. Default storage classes are not defined in user-provisioned infrastructure installations unless explicitly configured. A load balancer is required to distribute API requests across all control plane nodes in highly available OpenShift Container Platform environments. You can use any TCP-based load balancing solution that meets OpenShift Container Platform DNS routing and port requirements. 7.1.3. Checking a load balancer configuration before OpenShift Container Platform installation Check your load balancer configuration prior to starting an OpenShift Container Platform installation. Prerequisites You have configured an external load balancer of your choosing, in preparation for an OpenShift Container Platform installation. The following example is based on a Red Hat Enterprise Linux (RHEL) host using HAProxy to provide load balancing services to a cluster. You have configured DNS in preparation for an OpenShift Container Platform installation. You have SSH access to your load balancer. Procedure Check that the haproxy systemd service is active: USD ssh <user_name>@<load_balancer> systemctl status haproxy Verify that the load balancer is listening on the required ports. The following example references ports 80 , 443 , 6443 , and 22623 . For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 6, verify port status by using the netstat command: USD ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623' For HAProxy instances running on Red Hat Enterprise Linux (RHEL) 7 or 8, verify port status by using the ss command: USD ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623' Note Red Hat recommends the ss command instead of netstat in Red Hat Enterprise Linux (RHEL) 7 or later. ss is provided by the iproute package. For more information on the ss command, see the Red Hat Enterprise Linux (RHEL) 7 Performance Tuning Guide . Check that the wildcard DNS record resolves to the load balancer: USD dig <wildcard_fqdn> @<dns_server> 7.1.4. Specifying OpenShift Container Platform installer log levels By default, the OpenShift Container Platform installer log level is set to info . If more detailed logging is required when diagnosing a failed OpenShift Container Platform installation, you can increase the openshift-install log level to debug when starting the installation again. Prerequisites You have access to the installation host. Procedure Set the installation log level to debug when initiating the installation: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1 1 Possible log levels include info , warn , error, and debug . 7.1.5. Troubleshooting openshift-install command issues If you experience issues running the openshift-install command, check the following: The installation has been initiated within 24 hours of Ignition configuration file creation. The Ignition files are created when the following command is run: USD ./openshift-install create ignition-configs --dir=./install_dir The install-config.yaml file is in the same directory as the installer. 
If an alternative installation path is declared by using the ./openshift-install --dir option, verify that the install-config.yaml file exists within that directory. 7.1.6. Monitoring installation progress You can monitor high-level installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. This provides greater visibility into how an installation progresses and helps identify the stage at which an installation failure occurs. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and control plane nodes. Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure Watch the installation log as the installation progresses: USD tail -f ~/<installation_directory>/.openshift_install.log Monitor the bootkube.service journald unit log on the bootstrap node, after it has booted. This provides visibility into the bootstrapping of the first control plane. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. Monitor the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Monitor crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. Monitor the logs using oc : USD oc adm node-logs --role=master -u crio If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service 7.1.7. Gathering bootstrap node diagnostic data When experiencing bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node. Prerequisites You have SSH access to your bootstrap node. You have the fully qualified domain name of the bootstrap node. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. Procedure If you have access to the bootstrap node's console, monitor the console until the node reaches the login prompt. Verify the Ignition file configuration. If you are hosting Ignition configuration files by using an HTTP server. Verify the bootstrap node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1 1 The -I option returns the header only. 
If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the bootstrap node, query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files, enter the following command: USD grep -is 'bootstrap.ign' /var/log/httpd/access_log If the bootstrap Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that the Ignition files exist and that they have the appropriate file and web server permissions on the serving host directly. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the bootstrap node's console to determine if the mechanism is injecting the bootstrap node Ignition file correctly. Verify the availability of the bootstrap node's assigned storage device. Verify that the bootstrap node has been assigned an IP address from the DHCP server. Collect bootkube.service journald unit logs from the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers. Collect the logs using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' If the bootstrap process fails, verify the following. You can resolve api.<cluster_name>.<base_domain> from the installation host. The load balancer proxies port 6443 connections to bootstrap and control plane nodes. Ensure that the proxy configuration meets OpenShift Container Platform installation requirements. 7.1.8. Investigating control plane node installation issues If you experience control plane node installation issues, determine the control plane node OpenShift Container Platform software defined network (SDN), and network Operator status. Collect kubelet.service , crio.service journald unit logs, and control plane node container logs for visibility into control plane node agent, CRI-O container runtime, and pod activity. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and control plane nodes. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure If you have access to the console for the control plane node, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. Verify Ignition file configuration. 
If you are hosting Ignition configuration files by using an HTTP server. Verify the control plane node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/master.ign 1 1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the control plane node query the HTTP server logs on the serving host. For example, if you are using an Apache web server to serve Ignition files: USD grep -is 'master.ign' /var/log/httpd/access_log If the master Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the console for the control plane node to determine if the mechanism is injecting the control plane node Ignition file correctly. Check the availability of the storage device assigned to the control plane node. Verify that the control plane node has been assigned an IP address from the DHCP server. Determine control plane node status. Query control plane node status: USD oc get nodes If one of the control plane nodes does not reach a Ready status, retrieve a detailed node description: USD oc describe node <master_node> Note It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node: Determine OpenShift Container Platform SDN status. Review sdn-controller , sdn , and ovs daemon set status, in the openshift-sdn namespace: USD oc get daemonsets -n openshift-sdn If those resources are listed as Not found , review pods in the openshift-sdn namespace: USD oc get pods -n openshift-sdn Review logs relating to failed OpenShift Container Platform SDN pods in the openshift-sdn namespace: USD oc logs <sdn_pod> -n openshift-sdn Determine cluster network configuration status. Review whether the cluster's network configuration exists: USD oc get network.config.openshift.io cluster -o yaml If the installer failed to create the network configuration, generate the Kubernetes manifests again and review message output: USD ./openshift-install create manifests Review the pod status in the openshift-network-operator namespace to determine whether the Cluster Network Operator (CNO) is running: USD oc get pods -n openshift-network-operator Gather network Operator pod logs from the openshift-network-operator namespace: USD oc logs pod/<network_operator_pod_name> -n openshift-network-operator Monitor kubelet.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node agent activity. Retrieve the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. 
Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Retrieve crio.service journald unit logs on control plane nodes, after they have booted. This provides visibility into control plane node CRI-O container runtime activity. Retrieve the logs using oc : USD oc adm node-logs --role=master -u crio If the API is not functional, review the logs using SSH instead: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Collect logs from specific subdirectories under /var/log/ on control plane nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Review control plane node container logs using SSH. List the containers: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a Retrieve a container's logs using crictl : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> If you experience control plane node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values: USD curl https://api-int.<cluster_name>:22623/config/master If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer. Run a DNS lookup for the defined MCO endpoint name: USD dig api-int.<cluster_name> @<dns_server> Run a reverse lookup to the assigned MCO IP address on the load balancer: USD dig -x <load_balancer_mco_ip_address> @<dns_server> Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master System clock time must be synchronized between bootstrap, master, and worker nodes. 
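The following step checks one node at a time. To compare clock synchronization across several nodes in a single pass, you can wrap the same chronyc query in a loop, as in this sketch; the host names are placeholders for your own bootstrap, control plane, and compute nodes.

#!/bin/bash
# Spot-check clock synchronization on several nodes.
# Replace the placeholder names with the FQDNs of your own nodes.
nodes="bootstrap.<cluster_name>.<base_domain> master-0.<cluster_name>.<base_domain> worker-0.<cluster_name>.<base_domain>"
for node in ${nodes}; do
    echo "=== ${node} ==="
    # "System time" reports the offset from the reference clock;
    # "Leap status" should read "Normal" on a healthy node.
    ssh core@"${node}" chronyc tracking | grep -E 'System time|Leap status'
done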
Check each node's system clock reference time and time synchronization statistics: USD ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking Review certificate validity: USD openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text 7.1.9. Investigating etcd installation issues If you experience etcd issues during installation, you can check etcd pod status and collect etcd pod logs. You can also verify etcd DNS records and check DNS availability on control plane nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the control plane nodes. Procedure Check the status of etcd pods. Review the status of pods in the openshift-etcd namespace: USD oc get pods -n openshift-etcd Review the status of pods in the openshift-etcd-operator namespace: USD oc get pods -n openshift-etcd-operator If any of the pods listed by the commands are not showing a Running or a Completed status, gather diagnostic information for the pod. Review events for the pod: USD oc describe pod/<pod_name> -n <namespace> Inspect the pod's logs: USD oc logs pod/<pod_name> -n <namespace> If the pod has more than one container, the preceding command will create an error, and the container names will be provided in the error message. Inspect logs for each container: USD oc logs pod/<pod_name> -c <container_name> -n <namespace> If the API is not functional, review etcd pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List etcd pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd- For any pods not showing Ready status, inspect pod status in detail. Replace <pod_id> with the pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id> List containers related to a pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>' For any containers not showing Ready status, inspect container status in detail. Replace <container_id> with container IDs listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any containers not showing a Ready status. Replace <container_id> with the container IDs listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Validate primary and secondary DNS server connectivity from control plane nodes. 7.1.10. 
Investigating control plane node kubelet and API server issues To investigate control plane node kubelet and API server issues during installation, check DNS, DHCP, and load balancer functionality. Also, verify that certificates have not expired. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the control plane nodes. Procedure Verify that the API server's DNS record directs the kubelet on control plane nodes to https://api-int.<cluster_name>.<base_domain>:6443 . Ensure that the record references the load balancer. Ensure that the load balancer's port 6443 definition references each control plane node. Check that unique control plane node hostnames have been provided by DHCP. Inspect the kubelet.service journald unit logs on each control plane node. Retrieve the logs using oc : USD oc adm node-logs --role=master -u kubelet If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Check for certificate expiration messages in the control plane node kubelet logs. Retrieve the log using oc : USD oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired' If the API is not functional, review the logs using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired' 7.1.11. Investigating worker node installation issues If you experience worker node installation issues, you can review the worker node status. Collect kubelet.service , crio.service journald unit logs and the worker node container logs for visibility into the worker node agent, CRI-O container runtime and pod activity. Additionally, you can check the Ignition file and Machine API Operator functionality. If worker node postinstallation configuration fails, check Machine Config Operator (MCO) and DNS functionality. You can also verify system clock synchronization between the bootstrap, master, and worker nodes, and validate certificates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. You have the fully qualified domain names of the bootstrap and worker nodes. If you are hosting Ignition configuration files by using an HTTP server, you must have the HTTP server's fully qualified domain name and the port number. You must also have SSH access to the HTTP host. 
Note The initial kubeadmin password can be found in <install_directory>/auth/kubeadmin-password on the installation host. Procedure If you have access to the worker node's console, monitor the console until the node reaches the login prompt. During the installation, Ignition log messages are output to the console. Verify Ignition file configuration. If you are hosting Ignition configuration files by using an HTTP server. Verify the worker node Ignition file URL. Replace <http_server_fqdn> with HTTP server's fully qualified domain name: USD curl -I http://<http_server_fqdn>:<port>/worker.ign 1 1 The -I option returns the header only. If the Ignition file is available on the specified URL, the command returns 200 OK status. If it is not available, the command returns 404 file not found . To verify that the Ignition file was received by the worker node, query the HTTP server logs on the HTTP host. For example, if you are using an Apache web server to serve Ignition files: USD grep -is 'worker.ign' /var/log/httpd/access_log If the worker Ignition file is received, the associated HTTP GET log message will include a 200 OK success status, indicating that the request succeeded. If the Ignition file was not received, check that it exists on the serving host directly. Ensure that the appropriate file and web server permissions are in place. If you are using a cloud provider mechanism to inject Ignition configuration files into hosts as part of their initial deployment. Review the worker node's console to determine if the mechanism is injecting the worker node Ignition file correctly. Check the availability of the worker node's assigned storage device. Verify that the worker node has been assigned an IP address from the DHCP server. Determine worker node status. Query node status: USD oc get nodes Retrieve a detailed node description for any worker nodes not showing a Ready status: USD oc describe node <worker_node> Note It is not possible to run oc commands if an installation issue prevents the OpenShift Container Platform API from running or if the kubelet is not running yet on each node. Unlike control plane nodes, worker nodes are deployed and scaled using the Machine API Operator. Check the status of the Machine API Operator. Review Machine API Operator pod status: USD oc get pods -n openshift-machine-api If the Machine API Operator pod does not have a Ready status, detail the pod's events: USD oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api Inspect machine-api-operator container logs. The container runs within the machine-api-operator pod: USD oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator Also inspect kube-rbac-proxy container logs. The container also runs within the machine-api-operator pod: USD oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy Monitor kubelet.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node agent activity. Retrieve the logs using oc : USD oc adm node-logs --role=worker -u kubelet If the API is not functional, review the logs using SSH instead. Replace <worker-node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. 
Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . Retrieve crio.service journald unit logs on worker nodes, after they have booted. This provides visibility into worker node CRI-O container runtime activity. Retrieve the logs using oc : USD oc adm node-logs --role=worker -u crio If the API is not functional, review the logs using SSH instead: USD ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Collect logs from specific subdirectories under /var/log/ on worker nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/sssd/ on all worker nodes: USD oc adm node-logs --role=worker --path=sssd Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/sssd/sssd.log contents from all worker nodes: USD oc adm node-logs --role=worker --path=sssd/sssd.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/sssd/sssd.log : USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log Review worker node container logs using SSH. List the containers: USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a Retrieve a container's logs using crictl : USD ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> If you experience worker node configuration issues, verify that the MCO, MCO endpoint, and DNS record are functioning. The Machine Config Operator (MCO) manages operating system configuration during the installation procedure. Also verify system clock accuracy and certificate validity. Test whether the MCO endpoint is available. Replace <cluster_name> with appropriate values: USD curl https://api-int.<cluster_name>:22623/config/worker If the endpoint is unresponsive, verify load balancer configuration. Ensure that the endpoint is configured to run on port 22623. Verify that the MCO endpoint's DNS record is configured and resolves to the load balancer. Run a DNS lookup for the defined MCO endpoint name: USD dig api-int.<cluster_name> @<dns_server> Run a reverse lookup to the assigned MCO IP address on the load balancer: USD dig -x <load_balancer_mco_ip_address> @<dns_server> Verify that the MCO is functioning from the bootstrap node directly. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker System clock time must be synchronized between bootstrap, master, and worker nodes. Check each node's system clock reference time and time synchronization statistics: USD ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking Review certificate validity: USD openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text 7.1.12. Querying Operator status after installation You can check Operator status at the end of an installation. Retrieve diagnostic data for Operators that do not become available. 
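As a quick first pass, you can filter the cluster Operator list for entries that are not yet available, are still progressing, or report a degraded condition. The following one-liner is a sketch that assumes the default column layout of oc get clusteroperators ( NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE ):

# Print the names of cluster Operators that are unavailable, progressing, or degraded.
oc get clusteroperators --no-headers | awk '$3 != "True" || $4 == "True" || $5 == "True" {print $1}'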
Review logs for any Operator pods that are listed as Pending or have an error status. Validate base images used by problematic pods. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Check that cluster Operators are all available at the end of an installation. USD oc get clusteroperators Verify that all of the required certificate signing requests (CSRs) are approved. Some nodes might not move to a Ready status and some cluster Operators might not become available if there are pending CSRs. Check the status of the CSRs and ensure that you see a client and server request with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 1 A client request CSR. 2 A server request CSR. In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After you approve the initial CSRs, the subsequent node client CSRs are automatically approved by the cluster kube-controller-manager . Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve View Operator events: USD oc describe clusteroperator <operator_name> Review Operator pod status within the Operator's namespace: USD oc get pods -n <operator_namespace> Obtain a detailed description for pods that do not have Running status: USD oc describe pod/<operator_pod_name> -n <operator_namespace> Inspect pod logs: USD oc logs pod/<operator_pod_name> -n <operator_namespace> When experiencing pod base image related issues, review base image status. 
Obtain details of the base image used by a problematic pod: USD oc get pod -o "jsonpath={range .status.containerStatuses[*]}{.name}{'\t'}{.state}{'\t'}{.image}{'\n'}{end}" <operator_pod_name> -n <operator_namespace> List base image release information: USD oc adm release info <image_path>:<tag> --commits 7.1.13. Gathering logs from a failed installation If you gave an SSH key to your installation program, you can gather data about your failed installation. Note You use a different command to gather logs about an unsuccessful installation than to gather logs from a running cluster. If you must gather logs from a running cluster, use the oc adm must-gather command. Prerequisites Your OpenShift Container Platform installation failed before the bootstrap process finished. The bootstrap node is running and accessible through SSH. The ssh-agent process is active on your computer, and you provided the same SSH key to both the ssh-agent process and the installation program. If you tried to install a cluster on infrastructure that you provisioned, you must have the fully qualified domain names of the bootstrap and control plane nodes. Procedure Generate the commands that are required to obtain the installation logs from the bootstrap and control plane machines: If you used installer-provisioned infrastructure, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> 1 1 installation_directory is the directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. For installer-provisioned infrastructure, the installation program stores information about the cluster, so you do not specify the hostnames or IP addresses. If you used infrastructure that you provisioned yourself, change to the directory that contains the installation program and run the following command: USD ./openshift-install gather bootstrap --dir <installation_directory> \ 1 --bootstrap <bootstrap_address> \ 2 --master <master_1_address> \ 3 --master <master_2_address> \ 4 --master <master_3_address>" 5 1 For installation_directory , specify the same directory you specified when you ran ./openshift-install create cluster . This directory contains the OpenShift Container Platform definition files that the installation program creates. 2 <bootstrap_address> is the fully qualified domain name or IP address of the cluster's bootstrap machine. 3 4 5 For each control plane, or master, machine in your cluster, replace <master_*_address> with its fully qualified domain name or IP address. Note A default cluster contains three control plane machines. List all of your control plane machines as shown, no matter how many your cluster uses. Example output INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here "<installation_directory>/log-bundle-<timestamp>.tar.gz" If you open a Red Hat support case about your installation failure, include the compressed logs in the case. 7.1.14. Additional resources See Installation process for more details on OpenShift Container Platform installation types and process. 7.2. Verifying node health 7.2.1. Reviewing node status, resource usage, and configuration Review cluster node health status, resource consumption statistics, and node logs. Additionally, query kubelet status on individual nodes. 
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the name, status, and role for all nodes in the cluster: USD oc get nodes Summarize CPU and memory usage for each node within the cluster: USD oc adm top nodes Summarize CPU and memory usage for a specific node: USD oc adm top node my-node 7.2.2. Querying the kubelet's status on a node You can review cluster node health status, resource consumption statistics, and node logs. Additionally, you can query kubelet status on individual nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure The kubelet is managed using a systemd service on each node. Review the kubelet's status by querying the kubelet systemd service within a debug pod. Start a debug pod for a node: USD oc debug node/my-node Note If you are running oc debug on a control plane node, you can find administrative kubeconfig files in the /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs directory. Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Check whether the kubelet systemd service is active on the node: # systemctl is-active kubelet Output a more detailed kubelet.service status summary: # systemctl status kubelet 7.2.3. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. 
The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3. Troubleshooting CRI-O container runtime issues 7.3.1. About CRI-O container runtime engine CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node. When container runtime issues occur, verify the status of the crio systemd service on each node. Gather CRI-O journald unit logs from nodes that have container runtime issues. 7.3.2. Verifying CRI-O runtime engine status You can verify CRI-O container runtime engine status on each cluster node. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Review CRI-O status by querying the crio systemd service on a node, within a debug pod. Start a debug pod for a node: USD oc debug node/my-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Check whether the crio systemd service is active on the node: # systemctl is-active crio Output a more detailed crio.service status summary: # systemctl status crio.service 7.3.3. Gathering CRI-O journald unit logs If you experience CRI-O issues, you can obtain CRI-O journald unit logs from a node. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane machines. Procedure Gather CRI-O journald unit logs. The following example collects logs from all control plane nodes within the cluster: USD oc adm node-logs --role=master -u crio Gather CRI-O journald unit logs from a specific node: USD oc adm node-logs <node_name> -u crio If the API is not functional, review the logs using SSH instead.
Replace <node>.<cluster_name>.<base_domain> with appropriate values: USD ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.3.4. Cleaning CRI-O storage You can manually clear the CRI-O ephemeral storage if you experience the following issues: A node cannot run any pods and this error appears: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory You cannot create a new container on a working node and the "can't stat lower layer" error appears: can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks. Your node is in the NotReady state after a cluster upgrade or if you attempt to reboot it. The container runtime implementation ( crio ) is not working properly. You are unable to start a debug shell on the node using oc debug node/<node_name> because the container runtime instance ( crio ) is not working. Follow this process to completely wipe the CRI-O storage and resolve the errors. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Cordon the node to prevent workloads from being scheduled on it if it returns to the Ready status. You will know that scheduling is disabled when SchedulingDisabled appears in the node's Status section: USD oc adm cordon <node_name> Drain the node as the cluster-admin user: USD oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data Note The terminationGracePeriodSeconds attribute of a pod or pod template controls the graceful termination period. This attribute defaults to 30 seconds, but can be customized for each application as necessary. If set to more than 90 seconds, the pod might be marked as SIGKILLed and fail to terminate successfully. When the node returns, connect back to the node via SSH or the console. Then switch to the root user: USD ssh core@<node>.<cluster_name>.<base_domain> USD sudo -i Manually stop the kubelet: # systemctl stop kubelet Stop the containers and pods: Use the following command to stop the pods that are not in the HostNetwork . They must be removed first because their removal relies on the networking plugin pods, which are in the HostNetwork .
for pod in USD(crictl pods -q); do if [[ "USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)" != "NODE" ]]; then crictl rmp -f USDpod; fi; done Stop all other pods: # crictl rmp -fa Manually stop the crio services: # systemctl stop crio After you run those commands, you can completely wipe the ephemeral storage: # crio wipe -f Start the crio and kubelet service: # systemctl start crio # systemctl start kubelet You will know if the clean up worked if the crio and kubelet services are started, and the node is in the Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.26.0 Mark the node schedulable. You will know that the scheduling is enabled when SchedulingDisabled is no longer in status: USD oc adm uncordon <node_name> Example output NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.26.0 7.4. Troubleshooting operating system issues OpenShift Container Platform runs on RHCOS. You can follow these procedures to troubleshoot problems related to the operating system. 7.4.1. Investigating kernel crashes The kdump service, included in the kexec-tools package, provides a crash-dumping mechanism. You can use this service to save the contents of a system's memory for later analysis. The x86_64 architecture supports kdump in General Availability (GA) status, whereas other architectures support kdump in Technology Preview (TP) status. The following table provides details about the support level of kdump for different architectures. Table 7.1. Kdump support in RHCOS Architecture Support level x86_64 GA aarch64 TP s390x TP ppc64le TP Important Kdump support, for the preceding three architectures in the table, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.4.1.1. Enabling kdump RHCOS ships with the kexec-tools package, but manual configuration is required to enable the kdump service. Procedure Perform the following steps to enable kdump on RHCOS. To reserve memory for the crash kernel during the first kernel booting, provide kernel arguments by entering the following command: # rpm-ostree kargs --append='crashkernel=256M' Note For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G . Optional: To write the crash dump over the network or to some other location, rather than to the default local /var/crash location, edit the /etc/kdump.conf configuration file. Note If your node uses LUKS-encrypted devices, you must use network dumps as kdump does not support saving crash dumps to LUKS-encrypted devices. For details on configuring the kdump service, see the comments in /etc/sysconfig/kdump , /etc/kdump.conf , and the kdump.conf manual page. Also refer to the RHEL kdump documentation for further information on configuring the dump target. 
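As an illustration of a network dump target, the following /etc/kdump.conf sketch writes crash dumps to a remote host over SSH. The host name, user, and key path are assumptions for this example; adjust them for your environment and confirm the directives against the kdump.conf manual page.

# Example /etc/kdump.conf for an SSH dump target (sketch)
# The destination host, user, and key are placeholders for this example.
ssh kdump@dump-host.example.com
sshkey /root/.ssh/kdump_id_rsa
path /var/crash
# makedumpfile must write the flattened (-F) format when dumping over SSH.
core_collector makedumpfile -F -l --message-level 7 -d 31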
Important If you have multipathing enabled on your primary disk, the dump target must be either an NFS or SSH server and you must exclude the multipath module from your /etc/kdump.conf configuration file. Enable the kdump systemd service. # systemctl enable kdump.service Reboot your system. # systemctl reboot Ensure that kdump has loaded a crash kernel by checking that the kdump.service systemd service has started and exited successfully and that the command, cat /sys/kernel/kexec_crash_loaded , prints the value 1 . 7.4.1.2. Enabling kdump on day-1 The kdump service is intended to be enabled per node to debug kernel problems. Because there are costs to having kdump enabled, and these costs accumulate with each additional kdump-enabled node, it is recommended that the kdump service only be enabled on each node as needed. Potential costs of enabling the kdump service on each node include: Less available RAM due to memory being reserved for the crash kernel. Node unavailability while the kernel is dumping the core. Additional storage space being used to store the crash dumps. If you are aware of the downsides and trade-offs of having the kdump service enabled, it is possible to enable kdump in a cluster-wide fashion. Although machine-specific machine configs are not yet supported, you can use a systemd unit in a MachineConfig object as a day-1 customization and have kdump enabled on all nodes in the cluster. You can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Note See "Customizing nodes" in the Installing Installation configuration section for more information and examples on how to use Ignition configs. Procedure Create a MachineConfig object for cluster-wide configuration: Create a Butane config file, 99-worker-kdump.bu , that configures and enables kdump: variant: openshift version: 4.13.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE="hugepages hugepagesz slub_debug quiet log_buf_len swiotlb" KDUMP_COMMANDLINE_APPEND="irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable" 6 KEXEC_ARGS="-s" KDUMP_IMG="vmlinuz" systemd: units: - name: kdump.service enabled: true 1 2 Replace worker with master in both locations when creating a MachineConfig object for control plane nodes. 3 Provide kernel arguments to reserve memory for the crash kernel. You can add other kernel arguments if necessary. For the ppc64le platform, the recommended value for crashkernel is crashkernel=2G-4G:384M,4G-16G:512M,16G-64G:1G,64G-128G:2G,128G-:4G . 4 If you want to change the contents of /etc/kdump.conf from the default, include this section and modify the inline subsection accordingly. 5 If you want to change the contents of /etc/sysconfig/kdump from the default, include this section and modify the inline subsection accordingly. 6 For the ppc64le platform, replace nr_cpus=1 with maxcpus=1 , which is not supported on this platform. 
Note To export the dumps to NFS targets, the nfs kernel module must be explicitly added to the configuration file: Example /etc/kdump.conf file nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_modules nfs Use Butane to generate a machine config YAML file, 99-worker-kdump.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-kdump.bu -o 99-worker-kdump.yaml Put the YAML file into the <installation_directory>/manifests/ directory during cluster setup. You can also create this MachineConfig object after cluster setup with the YAML file: USD oc create -f 99-worker-kdump.yaml 7.4.1.3. Testing the kdump configuration See the Testing the kdump configuration section in the RHEL documentation for kdump. 7.4.1.4. Analyzing a core dump See the Analyzing a core dump section in the RHEL documentation for kdump. Note It is recommended to perform vmcore analysis on a separate RHEL system. Additional resources Setting up kdump in RHEL Linux kernel documentation for kdump kdump.conf(5) - a manual page for the /etc/kdump.conf configuration file containing the full documentation of available options kexec(8) - a manual page for the kexec package Red Hat Knowledgebase article regarding kexec and kdump 7.4.2. Debugging Ignition failures If a machine cannot be provisioned, Ignition fails and RHCOS will boot into the emergency shell. Use the following procedure to get debugging information. Procedure Run the following command to show which service units failed: USD systemctl --failed Optional: Run the following command on an individual service unit to find out more information: USD journalctl -u <unit>.service 7.5. Troubleshooting network issues 7.5.1. How the network interface is selected For installations on bare metal or with virtual machines that have more than one network interface controller (NIC), the NIC that OpenShift Container Platform uses for communication with the Kubernetes API server is determined by the nodeip-configuration.service service unit that is run by systemd when the node boots. The nodeip-configuration.service selects the IP from the interface associated with the default route. After the nodeip-configuration.service service determines the correct NIC, the service creates the /etc/systemd/system/kubelet.service.d/20-nodenet.conf file. The 20-nodenet.conf file sets the KUBELET_NODE_IP environment variable to the IP address that the service selected. When the kubelet service starts, it reads the value of the environment variable from the 20-nodenet.conf file and sets the IP address as the value of the --node-ip kubelet command-line argument. As a result, the kubelet service uses the selected IP address as the node IP address. If hardware or networking is reconfigured after installation, or if there is a networking layout where the node IP should not come from the default route interface, it is possible for the nodeip-configuration.service service to select a different NIC after a reboot. In some cases, you might be able to detect that a different NIC is selected by reviewing the INTERNAL-IP column in the output from the oc get nodes -o wide command. If network communication is disrupted or misconfigured because a different NIC is selected, you might receive the following error: EtcdCertSignerControllerDegraded . You can create a hint file that includes the NODEIP_HINT variable to override the default IP selection logic. For more information, see Optional: Overriding the default node IP selection logic. 
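To confirm which address the nodeip-configuration.service selected on a particular node, you can compare the INTERNAL-IP column with the generated kubelet drop-in file. The following commands are a sketch; they assume that the API is reachable and that a debug pod can be started on the node.

# Show the internal IP address that each node is reporting.
oc get nodes -o wide

# Inspect the drop-in file that nodeip-configuration.service generated on one node.
# Replace <node_name> with the name of the node you are investigating.
oc debug node/<node_name> -- chroot /host cat /etc/systemd/system/kubelet.service.d/20-nodenet.conf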
7.5.1.1. Optional: Overriding the default node IP selection logic To override the default IP selection logic, you can create a hint file that includes the NODEIP_HINT variable to override the default IP selection logic. Creating a hint file allows you to select a specific node IP address from the interface in the subnet of the IP address specified in the NODEIP_HINT variable. For example, if a node has two interfaces, eth0 with an address of 10.0.0.10/24 , and eth1 with an address of 192.0.2.5/24 , and the default route points to eth0 ( 10.0.0.10 ),the node IP address would normally use the 10.0.0.10 IP address. Users can configure the NODEIP_HINT variable to point at a known IP in the subnet, for example, a subnet gateway such as 192.0.2.1 so that the other subnet, 192.0.2.0/24 , is selected. As a result, the 192.0.2.5 IP address on eth1 is used for the node. The following procedure shows how to override the default node IP selection logic. Procedure Add a hint file to your /etc/default/nodeip-configuration file, for example: NODEIP_HINT=192.0.2.1 Important Do not use the exact IP address of a node as a hint, for example, 192.0.2.5 . Using the exact IP address of a node causes the node using the hint IP address to fail to configure correctly. The IP address in the hint file is only used to determine the correct subnet. It will not receive traffic as a result of appearing in the hint file. Generate the base-64 encoded content by running the following command: USD echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0 Example output Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== Activate the hint by creating a machine config manifest for both master and worker roles before deploying the cluster: 99-nodeip-hint-master.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration 1 Replace <encoded_contents> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== . Note that a space is not acceptable after the comma and before the encoded content. 99-nodeip-hint-worker.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration 1 Replace <encoded_contents> with the base64-encoded content of the /etc/default/nodeip-configuration file, for example, Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx== . Note that a space is not acceptable after the comma and before the encoded content. Save the manifest to the directory where you store your cluster configuration, for example, ~/clusterconfigs . Deploy the cluster. 7.5.2. Troubleshooting Open vSwitch issues To troubleshoot some Open vSwitch (OVS) issues, you might need to configure the log level to include more information. 
If you modify the log level on a node temporarily, be aware that you can receive log messages from the machine config daemon on the node like the following example: E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit] To avoid the log messages related to the mismatch, revert the log level change after you complete your troubleshooting. 7.5.2.1. Configuring the Open vSwitch log level temporarily For short-term troubleshooting, you can configure the Open vSwitch (OVS) log level temporarily. The following procedure does not require rebooting the node. In addition, the configuration change does not persist whenever you reboot the node. After you perform this procedure to change the log level, you can receive log messages from the machine config daemon that indicate a content mismatch for the ovs-vswitchd.service . To avoid the log messages, repeat this procedure and set the log level to the original value. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod for a node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell. The debug pod mounts the root file system from the host in /host within the pod. By changing the root directory to /host , you can run binaries from the host file system: # chroot /host View the current syslog level for OVS modules: # ovs-appctl vlog/list The following example output shows the log level for syslog set to info . Example output console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO ... Specify the log level in the /etc/systemd/system/ovs-vswitchd.service.d/10-ovs-vswitchd-restart.conf file: Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg In the preceding example, the log level is set to dbg . Change the last two lines by setting syslog:<log_level> to off , emer , err , warn , info , or dbg . The off log level filters out all log messages. Restart the service: # systemctl daemon-reload # systemctl restart ovs-vswitchd 7.5.2.2. Configuring the Open vSwitch log level permanently For long-term changes to the Open vSwitch (OVS) log level, you can change the log level permanently. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). 
Procedure Create a file, such as 99-change-ovs-loglevel.yaml , with a MachineConfig object like the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service 1 After you perform this procedure to configure control plane nodes, repeat the procedure and set the role to worker to configure worker nodes. 2 Set the syslog:<log_level> value. Log levels are off , emer , err , warn , info , or dbg . Setting the value to off filters out all log messages. Apply the machine config: USD oc apply -f 99-change-ovs-loglevel.yaml Additional resources Understanding the Machine Config Operator Checking machine config pool status 7.5.2.3. Displaying Open vSwitch logs Use the following procedure to display Open vSwitch (OVS) logs. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run one of the following commands: Display the logs by using the oc command from outside the cluster: USD oc adm node-logs <node_name> -u ovs-vswitchd Display the logs after logging on to a node in the cluster: # journalctl -b -f -u ovs-vswitchd.service One way to log on to a node is by using the oc debug node/<node_name> command. 7.6. Troubleshooting Operator issues Operators are a method of packaging, deploying, and managing an OpenShift Container Platform application. They act like an extension of the software vendor's engineering team, watching over an OpenShift Container Platform environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time. OpenShift Container Platform 4.13 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO). As a cluster administrator, you can install application Operators from the OperatorHub using the OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM). If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis. 7.6.1. Operator subscription condition types Subscriptions can report the following condition types: Table 7.2. Subscription condition types Condition Description CatalogSourcesUnhealthy Some or all of the catalog sources to be used in resolution are unhealthy. InstallPlanMissing An install plan for a subscription is missing. InstallPlanPending An install plan for a subscription is pending installation. InstallPlanFailed An install plan for a subscription has failed. ResolutionFailed The dependency resolution for a subscription has failed. Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. 
Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. Additional resources Catalog health requirements 7.6.2. Viewing Operator subscription status by using the CLI You can view Operator subscription status by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List Operator subscriptions: USD oc get subs -n <operator_namespace> Use the oc describe command to inspect a Subscription resource: USD oc describe sub <subscription_name> -n <operator_namespace> In the command output, find the Conditions section for the status of Operator subscription condition types. In the following example, the CatalogSourcesUnhealthy condition type has a status of false because all available catalog sources are healthy: Example output Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription # ... Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy # ... Note Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object. 7.6.3. Viewing Operator catalog source status by using the CLI You can view the status of an Operator catalog source by using the CLI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the catalog sources in a namespace. For example, you can check the openshift-marketplace namespace, which is used for cluster-wide catalog sources: USD oc get catalogsources -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m Use the oc describe command to get more details and status about a catalog source: USD oc describe catalogsource example-catalog -n openshift-marketplace Example output Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource # ... Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace # ... In the preceding example output, the last observed state is TRANSIENT_FAILURE . This state indicates that there is a problem establishing a connection for the catalog source. 
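If you only need the observed connection state rather than the full description, you can read it directly with a jsonpath query. The following is a convenience sketch that assumes the same example-catalog catalog source shown in the preceding output; the field path mirrors the Connection State block of the describe output:
USD oc get catalogsource example-catalog -n openshift-marketplace -o jsonpath='{.status.connectionState.lastObservedState}{"\n"}'
A healthy catalog source typically reports READY here, while values such as CONNECTING or TRANSIENT_FAILURE indicate a connection problem and warrant the pod-level checks that follow.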
List the pods in the namespace where your catalog source was created: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the example-catalog-bwt8z pod is ImagePullBackOff . This status indicates that there is an issue pulling the catalog source's index image. Use the oc describe command to inspect a pod for more detailed information: USD oc describe pod example-catalog-bwt8z -n openshift-marketplace Example output Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image "quay.io/example-org/example-catalog:v1" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials. Additional resources Operator Lifecycle Manager concepts and resources Catalog source gRPC documentation: States of Connectivity Accessing images for Operators from private registries 7.6.4. Querying Operator pod status You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure List Operators running in the cluster. The output includes Operator version, availability, and up-time information: USD oc get clusteroperators List Operator pods running in the Operator's namespace, plus pod status, restarts, and age: USD oc get pod -n <operator_namespace> Output a detailed Operator pod summary: USD oc describe pod <operator_pod_name> -n <operator_namespace> If an Operator issue is node-specific, query Operator container status on that node. Start a debug pod for the node: USD oc debug node/my-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. 
By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. List details about the node's containers, including state and associated pod IDs: # crictl ps List information about a specific Operator container on the node. The following example lists information about the network-operator container: # crictl ps --name network-operator Exit from the debug shell. 7.6.5. Gathering Operator logs If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have the fully qualified domain names of the control plane machines. Procedure List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age: USD oc get pods -n <operator_namespace> Review logs for an Operator pod: USD oc logs pod/<pod_name> -n <operator_namespace> If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container: USD oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace> If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace <master-node>.<cluster_name>.<base_domain> with appropriate values. List pods on each control plane node: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods For any Operator pods not showing a Ready status, inspect the pod's status in detail. Replace <operator_pod_id> with the Operator pod's ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id> List containers related to an Operator pod: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id> For any Operator container not showing a Ready status, inspect the container's status in detail. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id> Review the logs for any Operator containers not showing a Ready status. Replace <container_id> with a container ID listed in the output of the preceding command: USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id> Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must-gather and other oc commands is sufficient instead.
However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 7.6.6. Disabling the Machine Config Operator from automatically rebooting When configuration changes are made by the Machine Config Operator (MCO), Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic or manual, an RHCOS node reboots automatically unless it is paused. Note The following modifications do not trigger a node reboot: When the MCO detects any of the following changes, it applies the update without draining or rebooting the node: Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config. Changes to the global pull secret or pull secret in the openshift-config namespace. Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator. When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes: The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror. The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry. The addition of items to the unqualified-search-registries list. To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config. Note Pausing an MCP prevents the MCO from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires, and the MCO attempts to renew the certificate automatically, the MCO cannot push the newly rotated certificates to those nodes. This causes the cluster to become degraded and causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. New CA certificates are generated at 292 days from the installation date and removed at 365 days from that date. To determine the automatic CA certificate rotation, see the Understand CA cert auto renewal in Red Hat OpenShift 4 . 7.6.6.1. Disabling the Machine Config Operator from automatically rebooting by using the console To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process. 
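Because a paused pool also blocks rotated certificates from being pushed to its nodes, it can be useful to check how much time remains on the kube-apiserver-to-kubelet-signer CA certificate before pausing a pool for any length of time. The following sketch is one way to do this; it assumes that the signer is stored as a TLS secret named kube-apiserver-to-kubelet-signer in the openshift-kube-apiserver-operator namespace and that openssl is available locally, so verify the secret name in your cluster before relying on it:
USD oc get secret kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -enddate
If the expiry date is near, avoid pausing the pool, or plan to unpause it well before the rotation is due.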
Note See second NOTE in Disabling the Machine Config Operator from automatically rebooting . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To pause or unpause automatic MCO update rebooting: Pause the autoreboot process: Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Click Compute MachineConfigPools . On the MachineConfigPools page, click either master or worker , depending upon which nodes you want to pause rebooting for. On the master or worker page, click YAML . In the YAML, update the spec.paused field to true . Sample MachineConfigPool object apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool # ... spec: # ... paused: true 1 # ... 1 Update the spec.paused field to true to pause rebooting. To verify that the MCP is paused, return to the MachineConfigPools page. On the MachineConfigPools page, the Paused column reports True for the MCP you modified. If the MCP has pending changes while paused, the Updated column is False and Updating is False . When Updated is True and Updating is False , there are no pending changes. Important If there are pending changes (where both the Updated and Updating columns are False ), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot. Unpause the autoreboot process: Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Click Compute MachineConfigPools . On the MachineConfigPools page, click either master or worker , depending upon which nodes you want to unpause rebooting for. On the master or worker page, click YAML . In the YAML, update the spec.paused field to false . Sample MachineConfigPool object apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool # ... spec: # ... paused: false 1 # ... 1 Update the spec.paused field to false to allow rebooting. Note By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed. To verify that the MCP is unpaused, return to the MachineConfigPools page. On the MachineConfigPools page, the Paused column reports False for the MCP you modified. If the MCP is applying any pending changes, the Updated column is False and the Updating column is True . When Updated is True and Updating is False , there are no further changes being made. 7.6.6.2. Disabling the Machine Config Operator from automatically rebooting by using the CLI To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process. Note See second NOTE in Disabling the Machine Config Operator from automatically rebooting . Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure To pause or unpause automatic MCO update rebooting: Pause the autoreboot process: Update the MachineConfigPool custom resource to set the spec.paused field to true .
Control plane (master) nodes USD oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master Worker nodes USD oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker Verify that the MCP is paused: Control plane (master) nodes USD oc get machineconfigpool/master --template='{{.spec.paused}}' Worker nodes USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output true The spec.paused field is true and the MCP is paused. Determine if the MCP has pending changes: USD oc get machineconfigpool Example output If the UPDATED column is False and UPDATING is False , there are pending changes. When UPDATED is True and UPDATING is False , there are no pending changes. In the example, the worker node has pending changes. The control plane node does not have any pending changes. Important If there are pending changes (where both the Updated and Updating columns are False ), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot. Unpause the autoreboot process: Update the MachineConfigPool custom resource to set the spec.paused field to false . Control plane (master) nodes USD oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master Worker nodes USD oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker Note By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed. Verify that the MCP is unpaused: Control plane (master) nodes USD oc get machineconfigpool/master --template='{{.spec.paused}}' Worker nodes USD oc get machineconfigpool/worker --template='{{.spec.paused}}' Example output false The spec.paused field is false and the MCP is unpaused. Determine if the MCP has pending changes: USD oc get machineconfigpool Example output If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True . When UPDATED is True and UPDATING is False , there are no further changes being made. In the example, the MCO is updating the worker node. 7.6.7. Refreshing failing subscriptions In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the openshift-marketplace namespace that are failing with the following errors: Example output ImagePullBackOff for Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e" Example output rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade. You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator. Prerequisites You have a failing subscription that is unable to pull an inaccessible bundle image. You have confirmed that the correct bundle image is accessible.
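One way to confirm the last prerequisite is to inspect the bundle image directly from a workstation that has credentials for the registry. The following is an illustrative sketch that uses skopeo, which is also referenced elsewhere in this guide; substitute the image reference from the error message of your failing job and the path to your own registry authentication file:
USD skopeo inspect --authfile <registry_authfile> docker://<bundle_image_reference>
If the inspection fails with the same pull or DNS error shown in the example output above, resolve registry access or mirroring first, because recreating the subscription alone will not fix the image pull.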
Procedure Get the names of the Subscription and ClusterServiceVersion objects from the namespace where the Operator is installed: USD oc get sub,csv -n <namespace> Example output NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded Delete the subscription: USD oc delete subscription <subscription_name> -n <namespace> Delete the cluster service version: USD oc delete csv <csv_name> -n <namespace> Get the names of any failing jobs and related config maps in the openshift-marketplace namespace: USD oc get job,configmap -n openshift-marketplace Example output NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s Delete the job: USD oc delete job <job_name> -n openshift-marketplace This ensures pods that try to pull the inaccessible image are not recreated. Delete the config map: USD oc delete configmap <configmap_name> -n openshift-marketplace Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> 7.6.8. Reinstalling Operators after failed uninstallation You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. Failure to fully uninstall the Operator can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages. For example: Example Project resource description These types of issues can prevent an Operator from being reinstalled successfully. Warning Forced deletion of a namespace is not likely to resolve "Terminating" state issues and can lead to unstable or unpredictable cluster behavior, so it is better to try to find related resources that might be preventing the namespace from being deleted. For more information, see the Red Hat Knowledgebase Solution #4165791 , paying careful attention to the cautions and warnings. The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an existing custom resource definition (CRD) from a previous installation of the Operator is preventing a related namespace from deleting successfully. Procedure Check if there are any namespaces related to the Operator that are stuck in "Terminating" state: USD oc get namespaces Example output Check if there are any CRDs related to the Operator that are still present after the failed uninstallation: USD oc get crds Note CRDs are global cluster definitions; the actual custom resource (CR) instances related to the CRDs could be in other namespaces or be global cluster instances. If there are any CRDs that you know were provided or managed by the Operator and that should have been deleted after uninstallation, delete the CRD: USD oc delete crd <crd_name> Check if there are any remaining CR instances related to the Operator that are still present after uninstallation, and if so, delete the CRs: The type of CRs to search for can be difficult to determine after uninstallation and can require knowing what CRDs the Operator manages.
For example, if you are troubleshooting an uninstallation of the etcd Operator, which provides the EtcdCluster CRD, you can search for remaining EtcdCluster CRs in a namespace: USD oc get EtcdCluster -n <namespace_name> Alternatively, you can search across all namespaces: USD oc get EtcdCluster --all-namespaces If there are any remaining CRs that should be removed, delete the instances: USD oc delete <cr_name> <cr_instance_name> -n <namespace_name> Check that the namespace deletion has successfully resolved: USD oc get namespace <namespace_name> Important If the namespace or other Operator resources are still not uninstalled cleanly, contact Red Hat Support. Reinstall the Operator using OperatorHub in the web console. Verification Check that the Operator has been reinstalled successfully: USD oc get sub,csv,installplan -n <namespace> Additional resources Deleting Operators from a cluster Adding Operators to a cluster 7.7. Investigating pod issues OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host. A pod is the smallest compute unit that can be defined, deployed, and managed on OpenShift Container Platform 4.13. After a pod is defined, it is assigned to run on a node until its containers exit, or until it is removed. Depending on policy and exit code, Pods are either removed after exiting or retained so that their logs can be accessed. The first thing to check when pod issues arise is the pod's status. If an explicit pod failure has occurred, observe the pod's error state to identify specific image, container, or pod network issues. Focus diagnostic data collection according to the error state. Review pod event messages, as well as pod and container log information. Diagnose issues dynamically by accessing running Pods on the command line, or start a debug pod with root access based on a problematic pod's deployment configuration. 7.7.1. Understanding pod error states Pod failures return explicit error states that can be observed in the status field in the output of oc get pods . Pod error states cover image, container, and container network related failures. The following table provides a list of pod error states along with their descriptions. Table 7.3. Pod error states Pod error state Description ErrImagePull Generic image retrieval error. ErrImagePullBackOff Image retrieval failed and is backed off. ErrInvalidImageName The specified image name was invalid. ErrImageInspect Image inspection did not succeed. ErrImageNeverPull PullPolicy is set to NeverPullImage and the target image is not present locally on the host. ErrRegistryUnavailable When attempting to retrieve an image from a registry, an HTTP error was encountered. ErrContainerNotFound The specified container is either not present or not managed by the kubelet, within the declared pod. ErrRunInitContainer Container initialization failed. ErrRunContainer None of the pod's containers started successfully. ErrKillContainer None of the pod's containers were killed successfully. ErrCrashLoopBackOff A container has terminated. The kubelet will not attempt to restart it. ErrVerifyNonRoot A container or image attempted to run with root privileges. ErrCreatePodSandbox Pod sandbox creation did not succeed. ErrConfigPodSandbox Pod sandbox configuration was not obtained. ErrKillPodSandbox A pod sandbox did not stop successfully. ErrSetupNetwork Network initialization failed. ErrTeardownNetwork Network termination failed. 7.7.2. 
Reviewing pod status You can query pod status and error states. You can also query a pod's associated deployment configuration and review base image availability. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). skopeo is installed. Procedure Switch into a project: USD oc project <project_name> List pods running within the namespace, as well as pod status, error states, restarts, and age: USD oc get pods Determine whether the namespace is managed by a deployment configuration: USD oc status If the namespace is managed by a deployment configuration, the output includes the deployment configuration name and a base image reference. Inspect the base image referenced in the preceding command's output: USD skopeo inspect docker://<image_reference> If the base image reference is not correct, update the reference in the deployment configuration: USD oc edit deployment/my-deployment When deployment configuration changes on exit, the configuration will automatically redeploy. Watch pod status as the deployment progresses, to determine whether the issue has been resolved: USD oc get pods -w Review events within the namespace for diagnostic information relating to pod failures: USD oc get events 7.7.3. Inspecting pod and container logs You can inspect pod and container logs for warnings and error messages related to explicit pod failures. Depending on policy and exit code, pod and container logs remain available after pods have been terminated. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Query logs for a specific pod: USD oc logs <pod_name> Query logs for a specific container within a pod: USD oc logs <pod_name> -c <container_name> Logs retrieved using the preceding oc logs commands are composed of messages sent to stdout within pods or containers. Inspect logs contained in /var/log/ within a pod. List log files and subdirectories contained in /var/log within a pod: USD oc exec <pod_name> -- ls -alh /var/log Example output total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp Query a specific log file contained in /var/log within a pod: USD oc exec <pod_name> cat /var/log/<path_to_log> Example output 2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 
2023-07-10T10:29:38+0000 INFO List log files and subdirectories contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> ls /var/log Query a specific log file contained in /var/log within a specific container: USD oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log> 7.7.4. Accessing running pods You can review running pods dynamically by opening a shell inside a pod or by gaining network access through port forwarding. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Switch into the project that contains the pod you would like to access. This is necessary because the oc rsh command does not accept the -n namespace option: USD oc project <namespace> Start a remote shell into a pod: USD oc rsh <pod_name> 1 1 If a pod has multiple containers, oc rsh defaults to the first container unless -c <container_name> is specified. Start a remote shell into a specific container within a pod: USD oc rsh -c <container_name> pod/<pod_name> Create a port forwarding session to a port on a pod: USD oc port-forward <pod_name> <host_port>:<pod_port> 1 1 Enter Ctrl+C to cancel the port forwarding session. 7.7.5. Starting debug pods with root access You can start a debug pod with root access, based on a problematic pod's deployment or deployment configuration. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Start a debug pod with root access, based on a deployment. Obtain a project's deployment name: USD oc get deployment -n <project_name> Start a debug pod with root privileges, based on the deployment: USD oc debug deployment/my-deployment --as-root -n <project_name> Start a debug pod with root access, based on a deployment configuration. Obtain a project's deployment configuration name: USD oc get deploymentconfigs -n <project_name> Start a debug pod with root privileges, based on the deployment configuration: USD oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name> Note You can append -- <command> to the preceding oc debug commands to run individual commands within a debug pod, instead of running an interactive shell. 7.7.6. Copying files to and from pods and containers You can copy files to and from a pod to test configuration changes or gather diagnostic information. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Copy a file to a pod: USD oc cp <local_path> <pod_name>:/<path> -c <container_name> 1 1 The first container in a pod is selected if the -c option is not specified. Copy a file from a pod: USD oc cp <pod_name>:/<path> -c <container_name> <local_path> 1 1 The first container in a pod is selected if the -c option is not specified. Note For oc cp to function, the tar binary must be available within the container. 7.8. Troubleshooting the Source-to-Image process 7.8.1. Strategies for Source-to-Image troubleshooting Use Source-to-Image (S2I) to build reproducible, Docker-formatted container images. 
You can create ready-to-run images by injecting application source code into a container image and assembling a new image. The new image incorporates the base image (the builder) and built source. To determine where in the S2I process a failure occurs, you can observe the state of the pods relating to each of the following S2I stages: During the build configuration stage , a build pod is used to create an application container image from a base image and application source code. During the deployment configuration stage , a deployment pod is used to deploy application pods from the application container image that was built in the build configuration stage. The deployment pod also deploys other resources such as services and routes. The deployment configuration begins after the build configuration succeeds. After the deployment pod has started the application pods , application failures can occur within the running application pods. For instance, an application might not behave as expected even though the application pods are in a Running state. In this scenario, you can access running application pods to investigate application failures within a pod. When troubleshooting S2I issues, follow this strategy: Monitor build, deployment, and application pod status Determine the stage of the S2I process where the problem occurred Review logs corresponding to the failed stage 7.8.2. Gathering Source-to-Image diagnostic data The S2I tool runs a build pod and a deployment pod in sequence. The deployment pod is responsible for deploying the application pods based on the application container image created in the build stage. Watch build, deployment and application pod status to determine where in the S2I process a failure occurs. Then, focus diagnostic data collection accordingly. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). Procedure Watch the pod status throughout the S2I process to determine at which stage a failure occurs: USD oc get pods -w 1 1 Use -w to monitor pods for changes until you quit the command using Ctrl+C . Review a failed pod's logs for errors. If the build pod fails , review the build pod's logs: USD oc logs -f pod/<application_name>-<build_number>-build Note Alternatively, you can review the build configuration's logs using oc logs -f bc/<application_name> . The build configuration's logs include the logs from the build pod. If the deployment pod fails , review the deployment pod's logs: USD oc logs -f pod/<application_name>-<build_number>-deploy Note Alternatively, you can review the deployment configuration's logs using oc logs -f dc/<application_name> . This outputs logs from the deployment pod until the deployment pod completes successfully. The command outputs logs from the application pods if you run it after the deployment pod has completed. After a deployment pod completes, its logs can still be accessed by running oc logs -f pod/<application_name>-<build_number>-deploy . If an application pod fails, or if an application is not behaving as expected within a running application pod , review the application pod's logs: USD oc logs -f pod/<application_name>-<build_number>-<random_string> 7.8.3. Gathering application diagnostic data to investigate application failures Application failures can occur within running application pods. 
In these situations, you can retrieve diagnostic information with these strategies: Review events relating to the application pods. Review the logs from the application pods, including application-specific log files that are not collected by the OpenShift Logging framework. Test application functionality interactively and run diagnostic tools in an application container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List events relating to a specific application pod. The following example retrieves events for an application pod named my-app-1-akdlg : USD oc describe pod/my-app-1-akdlg Review logs from an application pod: USD oc logs -f pod/my-app-1-akdlg Query specific logs within a running application pod. Logs that are sent to stdout are collected by the OpenShift Logging framework and are included in the output of the preceding command. The following query is only required for logs that are not sent to stdout. If an application log can be accessed without root privileges within a pod, concatenate the log file as follows: USD oc exec my-app-1-akdlg -- cat /var/log/my-application.log If root access is required to view an application log, you can start a debug container with root privileges and then view the log file from within the container. Start the debug container from the project's DeploymentConfig object. Pod users typically run with non-root privileges, but running troubleshooting pods with temporary root privileges can be useful during issue investigation: USD oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log Note You can access an interactive shell with root access within the debug pod if you run oc debug dc/<deployment_configuration> --as-root without appending -- <command> . Test application functionality interactively and run diagnostic tools in an application container with an interactive shell. Start an interactive shell on the application container: USD oc exec -it my-app-1-akdlg -- /bin/bash Test application functionality interactively from within the shell. For example, you can run the container's entry point command and observe the results. Then, test changes from the command line directly, before updating the source code and rebuilding the application container through the S2I process. Run diagnostic binaries available within the container. Note Root privileges are required to run some diagnostic binaries. In these situations you can start a debug pod with root access, based on a problematic pod's DeploymentConfig object, by running oc debug dc/<deployment_configuration> --as-root . Then, you can run diagnostic binaries as root from within the debug pod. If diagnostic binaries are not available within a container, you can run a host's diagnostic binaries within a container's namespace by using nsenter . The following example runs ip ad within a container's namespace, using the host's ip binary. Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.13 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes.
Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Determine the target container ID: # crictl ps Determine the container's process ID. In this example, the target container ID is a7fe32346b120 : # crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}' Run ip ad within the container's namespace, using the host's ip binary. This example uses 31150 as the container's process ID. The nsenter command enters the namespace of a target process and runs a command in its namespace. Because the target process in this example is a container's process ID, the ip ad command is run in the container's namespace from the host: # nsenter -n -t 31150 -- ip ad Note Running a host's diagnostic binaries within a container's namespace is only possible if you are using a privileged container such as a debug node. 7.8.4. Additional resources See Source-to-Image (S2I) build for more details about the S2I build strategy. 7.9. Troubleshooting storage issues 7.9.1. Resolving multi-attach errors When a node crashes or shuts down abruptly, the attached ReadWriteOnce (RWO) volume is expected to be unmounted from the node so that it can be used by a pod scheduled on another node. However, mounting on a new node is not possible because the failed node is unable to unmount the attached volume. A multi-attach error is reported: Example output Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume "pvc-8837384d-69d7-40b2-b2e6-5df86943eef9" Volume is already used by pod(s) sso-mysql-1-ns6b4 Procedure To resolve the multi-attach issue, use one of the following solutions: Enable multiple attachments by using RWX volumes. For most storage solutions, you can use ReadWriteMany (RWX) volumes to prevent multi-attach errors. Recover or delete the failed node when using an RWO volume. For storage that does not support RWX, such as VMware vSphere, RWO volumes must be used instead. However, RWO volumes cannot be mounted on multiple nodes. If you encounter a multi-attach error message with an RWO volume, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. USD oc delete pod <old_pod> --force=true --grace-period=0 This command deletes the volumes stuck on shutdown or crashed nodes after six minutes. 7.10. Troubleshooting Windows container workload issues 7.10.1. Windows Machine Config Operator does not install If you have completed the process of installing the Windows Machine Config Operator (WMCO), but the Operator is stuck in the InstallWaiting phase, your issue is likely caused by a networking issue. The WMCO requires your OpenShift Container Platform cluster to be configured with hybrid networking using OVN-Kubernetes; the WMCO cannot complete the installation process without hybrid networking available. This is necessary to manage nodes on multiple operating systems (OS) and OS variants. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . 7.10.2. 
Investigating why Windows Machine does not become compute node There are various reasons why a Windows Machine does not become a compute node. The best way to investigate this problem is to collect the Windows Machine Config Operator (WMCO) logs. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure Run the following command to collect the WMCO logs: USD oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator 7.10.3. Accessing a Windows node Windows nodes cannot be accessed using the oc debug node command; the command requires running a privileged pod on the node, which is not yet supported for Windows. Instead, a Windows node can be accessed using a secure shell (SSH) or Remote Desktop Protocol (RDP). An SSH bastion is required for both methods. 7.10.3.1. Accessing a Windows node using SSH You can access a Windows node by using a secure shell (SSH). Prerequisites You have installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use. You have connected to the Windows node using an ssh-bastion pod . Procedure Access the Windows node by running the following command: USD ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no \ -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion \ -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}")' <username>@<windows_node_internal_ip> 1 2 1 Specify the cloud provider username, such as Administrator for Amazon Web Services (AWS) or capi for Microsoft Azure. 2 Specify the internal IP address of the node, which can be discovered by running the following command: USD oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address} 7.10.3.2. Accessing a Windows node using RDP You can access a Windows node by using a Remote Desktop Protocol (RDP). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. You have added the key used in the cloud-private-key secret and the key used when creating the cluster to the ssh-agent. For security reasons, remember to remove the keys from the ssh-agent after use. You have connected to the Windows node using an ssh-bastion pod . Procedure Run the following command to set up an SSH tunnel: USD ssh -L 2020:<windows_node_internal_ip>:3389 \ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template="{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}") 1 Specify the internal IP address of the node, which can be discovered by running the following command: USD oc get nodes <node_name> -o jsonpath={.status.addresses[?\(@.type==\"InternalIP\"\)].address} From within the resulting shell, SSH into the Windows node and run the following command to create a password for the user: C:\> net user <username> * 1 1 Specify the cloud provider user name, such as Administrator for AWS or capi for Azure. You can now remotely access the Windows node at localhost:2020 using an RDP client. 7.10.4. 
Collecting Kubernetes node logs for Windows containers Windows container logging works differently from Linux container logging; the Kubernetes node logs for Windows workloads are streamed to the C:\var\logs directory by default. Therefore, you must gather the Windows node logs from that directory. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure To view the logs under all directories in C:\var\logs , run the following command: USD oc adm node-logs -l kubernetes.io/os=windows --path=/ Example output /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods You can now list files in the directories using the same command and view the individual log files. For example, to view the kubelet logs, run the following command: USD oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log 7.10.5. Collecting Windows application event logs The Get-WinEvent shim on the kubelet logs endpoint can be used to collect application event logs from Windows machines. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure To view logs from all applications logging to the event logs on the Windows machine, run: USD oc adm node-logs -l kubernetes.io/os=windows --path=journal The same command is executed when collecting logs with oc adm must-gather . Other Windows application logs from the event log can also be collected by specifying the respective service with a -u flag. For example, you can run the following command to collect logs for the docker runtime service: USD oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker 7.10.6. Collecting Docker logs for Windows containers The Windows Docker service does not stream its logs to stdout, but instead, logs to the event log for Windows. You can view the Docker event logs to investigate issues you think might be caused by the Windows Docker service. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You have created a Windows compute machine set. Procedure SSH into the Windows node and enter PowerShell: C:\> powershell View the Docker logs by running the following command: C:\> Get-EventLog -LogName Application -Source Docker 7.10.7. Additional resources Containers on Windows troubleshooting Troubleshoot host and container image mismatches Docker for Windows troubleshooting Common Kubernetes problems with Windows 7.11. Investigating monitoring issues OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. In OpenShift Container Platform 4.13, cluster administrators can optionally enable monitoring for user-defined projects. Use these procedures if the following issues occur: Your own metrics are unavailable. Prometheus is consuming a lot of disk space. The KubePersistentVolumeFillingUp alert is firing for Prometheus. 7.11.1. Investigating why user-defined project metrics are unavailable ServiceMonitor resources enable you to determine how to use the metrics exposed by a service in user-defined projects.
Follow the steps outlined in this procedure if you have created a ServiceMonitor resource but cannot see any corresponding metrics in the Metrics UI. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have enabled and configured monitoring for user-defined projects. You have created a ServiceMonitor resource. Procedure Check that the corresponding labels match in the service and ServiceMonitor resource configurations. Obtain the label defined in the service. The following example queries the prometheus-example-app service in the ns1 project: USD oc -n ns1 get service prometheus-example-app -o yaml Example output labels: app: prometheus-example-app Check that the matchLabels definition in the ServiceMonitor resource configuration matches the label output in the preceding step. The following example queries the prometheus-example-monitor service monitor in the ns1 project: USD oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml Example output apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app Note You can check service and ServiceMonitor resource labels as a developer with view permissions for the project. Inspect the logs for the Prometheus Operator in the openshift-user-workload-monitoring project. List the pods in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get pods Example output NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m Obtain the logs from the prometheus-operator container in the prometheus-operator pod. In the following example, the pod is called prometheus-operator-776fcbbd56-2nbfm : USD oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator If there is an issue with the service monitor, the logs might include an error similar to this example: level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg="skipping servicemonitor" error="it accesses file system via bearer token file which Prometheus specification prohibits" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload Review the target status for your endpoint on the Metrics targets page in the OpenShift Container Platform web console UI. Log in to the OpenShift Container Platform web console and navigate to Observe Targets in the Administrator perspective. Locate the metrics endpoint in the list, and review the status of the target in the Status column. If the Status is Down , click the URL for the endpoint to view more information on the Target Details page for that metrics target. Configure debug level logging for the Prometheus Operator in the openshift-user-workload-monitoring project.
Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add logLevel: debug for prometheusOperator under data/config.yaml to set the log level to debug : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug # ... Save the file to apply the changes. The affected prometheus-operator pod is automatically redeployed. Confirm that the debug log-level has been applied to the prometheus-operator deployment in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep "log-level" Example output - --log-level=debug Debug level logging will show all calls made by the Prometheus Operator. Check that the prometheus-operator pod is running: USD oc -n openshift-user-workload-monitoring get pods Note If an unrecognized Prometheus Operator loglevel value is included in the config map, the prometheus-operator pod might not restart successfully. Review the debug logs to see if the Prometheus Operator is using the ServiceMonitor resource. Review the logs for other related errors. Additional resources Creating a user-defined workload monitoring config map See Specifying how a service is monitored for details on how to create a service monitor or pod monitor See Getting detailed information about a metrics target 7.11.2. Determining why Prometheus is consuming a lot of disk space Developers can create labels to define attributes for metrics in the form of key-value pairs. The number of potential key-value pairs corresponds to the number of possible values for an attribute. An attribute that has an unlimited number of potential values is called an unbound attribute. For example, a customer_id attribute is unbound because it has an infinite number of possible values. Every assigned key-value pair has a unique time series. The use of many unbound attributes in labels can result in an exponential increase in the number of time series created. This can impact Prometheus performance and can consume a lot of disk space. You can use the following measures when Prometheus consumes a lot of disk: Check the time series database (TSDB) status using the Prometheus HTTP API for more information about which labels are creating the most time series data. Doing so requires cluster administrator privileges. Check the number of scrape samples that are being collected. Reduce the number of unique time series that are created by reducing the number of unbound attributes that are assigned to user-defined metrics. Note Using attributes that are bound to a limited set of possible values reduces the number of potential key-value pair combinations. Enforce limits on the number of samples that can be scraped across user-defined projects. This requires cluster administrator privileges. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Metrics . Enter a Prometheus Query Language (PromQL) query in the Expression field. 
The following example queries help to identify high cardinality metrics that might result in high disk space consumption: By running the following query, you can identify the ten jobs that have the highest number of scrape samples: topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling))) By running the following query, you can pinpoint time series churn by identifying the ten jobs that have created the most time series data in the last hour: topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h]))) Investigate the number of unbound label values assigned to metrics with higher than expected scrape sample counts: If the metrics relate to a user-defined project , review the metrics key-value pairs assigned to your workload. These are implemented through Prometheus client libraries at the application level. Try to limit the number of unbound attributes referenced in your labels. If the metrics relate to a core OpenShift Container Platform project , create a Red Hat support case on the Red Hat Customer Portal . Review the TSDB status using the Prometheus HTTP API by following these steps when logged in as a cluster administrator: Get the Prometheus API route URL by running the following command: $ HOST=$(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}') Extract an authentication token by running the following command: $ TOKEN=$(oc whoami -t) Query the TSDB status for Prometheus by running the following command: $ curl -H "Authorization: Bearer $TOKEN" -k "https://$HOST/api/v1/status/tsdb" Example output "status": "success","data":{"headStats":{"numSeries":507473, "numLabelPairs":19832,"chunkCount":946298,"minTime":1712253600010, "maxTime":1712257935346},"seriesCountByMetricName": [{"name":"etcd_request_duration_seconds_bucket","value":51840}, {"name":"apiserver_request_sli_duration_seconds_bucket","value":47718}, ... Additional resources See Setting a scrape sample limit for user-defined projects for details on how to set a scrape sample limit and create related alerting rules 7.11.3. Resolving the KubePersistentVolumeFillingUp alert firing for Prometheus As a cluster administrator, you can resolve the KubePersistentVolumeFillingUp alert being triggered for Prometheus. The critical alert fires when a persistent volume (PV) claimed by a prometheus-k8s-* pod in the openshift-monitoring project has less than 3% total space remaining. This can cause Prometheus to function abnormally. Note There are two KubePersistentVolumeFillingUp alerts: Critical alert : The alert with the severity="critical" label is triggered when the mounted PV has less than 3% total space remaining. Warning alert : The alert with the severity="warning" label is triggered when the mounted PV has less than 15% total space remaining and is expected to fill up within four days. To address this issue, you can remove Prometheus time-series database (TSDB) blocks to create more space for the PV. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have installed the OpenShift CLI ( oc ).
Procedure List the size of all TSDB blocks, sorted from oldest to newest, by running the following command: $ oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 -c prometheus --image=$(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'cd /prometheus/;du -hs $(ls -dtr */ | grep -Eo "[0-9|A-Z]{26}")' 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. Example output 308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B Identify which and how many blocks could be removed, then remove the blocks. The following example command removes the three oldest Prometheus TSDB blocks from the prometheus-k8s-0 pod: $ oc debug prometheus-k8s-0 -n openshift-monitoring \ -c prometheus --image=$(oc get po -n openshift-monitoring prometheus-k8s-0 \ -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') \ -- sh -c 'ls -latr /prometheus/ | egrep -o "[0-9|A-Z]{26}" | head -3 | \ while read BLOCK; do rm -r /prometheus/$BLOCK; done' Verify the usage of the mounted PV and ensure there is enough space available by running the following command: $ oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \ 1 --image=$(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \ 2 -o jsonpath='{.spec.containers[?(@.name=="prometheus")].image}') -- df -h /prometheus/ 1 2 Replace <prometheus_k8s_pod_name> with the pod mentioned in the KubePersistentVolumeFillingUp alert description. The following example output shows the mounted PV claimed by the prometheus-k8s-0 pod that has 63% of space remaining: Example output Starting pod/prometheus-k8s-0-debug-j82w4 ... Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod ... 7.12. Diagnosing OpenShift CLI ( oc ) issues 7.12.1. Understanding OpenShift CLI ( oc ) log levels With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. If oc command-specific issues arise, increase the oc log level to output API request, API response, and curl request details generated by the command. This provides a granular view of a particular oc command's underlying operation, which in turn might provide insight into the nature of a failure. oc log levels range from 1 to 10. The following table provides a list of oc log levels, along with their descriptions. Table 7.4. OpenShift CLI (oc) log levels Log level Description 1 to 5 No additional logging to stderr. 6 Log API requests to stderr. 7 Log API requests and headers to stderr. 8 Log API requests, headers, and body, plus API response headers and body to stderr. 9 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr. 10 Log API requests, headers, and body, API response headers and body, plus curl requests to stderr, in verbose detail. 7.12.2. Specifying OpenShift CLI ( oc ) log levels You can investigate OpenShift CLI ( oc ) issues by increasing the command's log level. The OpenShift Container Platform user's current session token is typically included in logged curl requests where required.
You can also obtain the current user's session token manually, for use when testing aspects of an oc command's underlying process step-by-step. Prerequisites Install the OpenShift CLI ( oc ). Procedure Specify the oc log level when running an oc command: $ oc <command> --loglevel <log_level> where: <command> Specifies the command you are running. <log_level> Specifies the log level to apply to the command. To obtain the current user's session token, run the following command: $ oc whoami -t Example output sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6...
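As an illustration of the step-by-step testing mentioned above, the session token can be passed to curl to replay one of the requests that oc logs at log level 8 or 9. The following sketch is not part of the documented procedure; the /version endpoint and the <api_server_url> placeholder are illustrative assumptions only: $ TOKEN=$(oc whoami -t) $ curl -k -H "Authorization: Bearer $TOKEN" "<api_server_url>/version" Comparing the raw response with the API response body that oc prints at log level 8 can help determine whether a failure originates in the CLI or in the API server.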
|
[
"ssh <user_name>@<load_balancer> systemctl status haproxy",
"ssh <user_name>@<load_balancer> netstat -nltupe | grep -E ':80|:443|:6443|:22623'",
"ssh <user_name>@<load_balancer> ss -nltupe | grep -E ':80|:443|:6443|:22623'",
"dig <wildcard_fqdn> @<dns_server>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level debug 1",
"./openshift-install create ignition-configs --dir=./install_dir",
"tail -f ~/<installation_directory>/.openshift_install.log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh [email protected]_name.sub_domain.domain journalctl -b -f -u crio.service",
"curl -I http://<http_server_fqdn>:<port>/bootstrap.ign 1",
"grep -is 'bootstrap.ign' /var/log/httpd/access_log",
"ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service",
"ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'",
"curl -I http://<http_server_fqdn>:<port>/master.ign 1",
"grep -is 'master.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <master_node>",
"oc get daemonsets -n openshift-sdn",
"oc get pods -n openshift-sdn",
"oc logs <sdn_pod> -n openshift-sdn",
"oc get network.config.openshift.io cluster -o yaml",
"./openshift-install create manifests",
"oc get pods -n openshift-network-operator",
"oc logs pod/<network_operator_pod_name> -n openshift-network-operator",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u crio",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/master",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/master",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get pods -n openshift-etcd",
"oc get pods -n openshift-etcd-operator",
"oc describe pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -n <namespace>",
"oc logs pod/<pod_name> -c <container_name> -n <namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods --name=etcd-",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps | grep '<pod_id>'",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"oc adm node-logs --role=master -u kubelet",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=master -u kubelet | grep -is 'x509: certificate has expired'",
"ssh core@<master-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service | grep -is 'x509: certificate has expired'",
"curl -I http://<http_server_fqdn>:<port>/worker.ign 1",
"grep -is 'worker.ign' /var/log/httpd/access_log",
"oc get nodes",
"oc describe node <worker_node>",
"oc get pods -n openshift-machine-api",
"oc describe pod/<machine_api_operator_pod_name> -n openshift-machine-api",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c machine-api-operator",
"oc logs pod/<machine_api_operator_pod_name> -n openshift-machine-api -c kube-rbac-proxy",
"oc adm node-logs --role=worker -u kubelet",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u kubelet.service",
"oc adm node-logs --role=worker -u crio",
"ssh core@<worker-node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"oc adm node-logs --role=worker --path=sssd",
"oc adm node-logs --role=worker --path=sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/sssd/sssd.log",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl ps -a",
"ssh core@<worker-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"curl https://api-int.<cluster_name>:22623/config/worker",
"dig api-int.<cluster_name> @<dns_server>",
"dig -x <load_balancer_mco_ip_address> @<dns_server>",
"ssh core@<bootstrap_fqdn> curl https://api-int.<cluster_name>:22623/config/worker",
"ssh core@<node>.<cluster_name>.<base_domain> chronyc tracking",
"openssl s_client -connect api-int.<cluster_name>:22623 | openssl x509 -noout -text",
"oc get clusteroperators",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 1 csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending 2 csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc describe clusteroperator <operator_name>",
"oc get pods -n <operator_namespace>",
"oc describe pod/<operator_pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -n <operator_namespace>",
"oc get pod -o \"jsonpath={range .status.containerStatuses[*]}{.name}{'\\t'}{.state}{'\\t'}{.image}{'\\n'}{end}\" <operator_pod_name> -n <operator_namespace>",
"oc adm release info <image_path>:<tag> --commits",
"./openshift-install gather bootstrap --dir <installation_directory> 1",
"./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address>\" 5",
"INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"",
"oc get nodes",
"oc adm top nodes",
"oc adm top node my-node",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active kubelet",
"systemctl status kubelet",
"oc adm node-logs --role=master -u kubelet 1",
"oc adm node-logs --role=master --path=openshift-apiserver",
"oc adm node-logs --role=master --path=openshift-apiserver/audit.log",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log",
"oc debug node/my-node",
"chroot /host",
"systemctl is-active crio",
"systemctl status crio.service",
"oc adm node-logs --role=master -u crio",
"oc adm node-logs <node_name> -u crio",
"ssh core@<node>.<cluster_name>.<base_domain> journalctl -b -f -u crio.service",
"Failed to create pod sandbox: rpc error: code = Unknown desc = failed to mount container XXX: error recreating the missing symlinks: error reading name of symlink for XXX: open /var/lib/containers/storage/overlay/XXX/link: no such file or directory",
"can't stat lower layer ... because it does not exist. Going through storage to recreate the missing symlinks.",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data",
"ssh [email protected] sudo -i",
"systemctl stop kubelet",
".. for pod in USD(crictl pods -q); do if [[ \"USD(crictl inspectp USDpod | jq -r .status.linux.namespaces.options.network)\" != \"NODE\" ]]; then crictl rmp -f USDpod; fi; done",
"crictl rmp -fa",
"systemctl stop crio",
"crio wipe -f",
"systemctl start crio systemctl start kubelet",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready, SchedulingDisabled master 133m v1.26.0",
"oc adm uncordon <node_name>",
"NAME STATUS ROLES AGE VERSION ci-ln-tkbxyft-f76d1-nvwhr-master-1 Ready master 133m v1.26.0",
"rpm-ostree kargs --append='crashkernel=256M'",
"systemctl enable kdump.service",
"systemctl reboot",
"variant: openshift version: 4.13.0 metadata: name: 99-worker-kdump 1 labels: machineconfiguration.openshift.io/role: worker 2 openshift: kernel_arguments: 3 - crashkernel=256M storage: files: - path: /etc/kdump.conf 4 mode: 0644 overwrite: true contents: inline: | path /var/crash core_collector makedumpfile -l --message-level 7 -d 31 - path: /etc/sysconfig/kdump 5 mode: 0644 overwrite: true contents: inline: | KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\" KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory mce=off numa=off udev.children-max=2 panic=10 rootflags=nofail acpi_no_memhotplug transparent_hugepage=never nokaslr novmcoredd hest_disable\" 6 KEXEC_ARGS=\"-s\" KDUMP_IMG=\"vmlinuz\" systemd: units: - name: kdump.service enabled: true",
"nfs server.example.com:/export/cores core_collector makedumpfile -l --message-level 7 -d 31 extra_modules nfs",
"butane 99-worker-kdump.bu -o 99-worker-kdump.yaml",
"oc create -f 99-worker-kdump.yaml",
"systemctl --failed",
"journalctl -u <unit>.service",
"NODEIP_HINT=192.0.2.1",
"echo -n 'NODEIP_HINT=192.0.2.1' | base64 -w0",
"Tk9ERUlQX0hJTlQ9MTkyLjAuMCxxxx==",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-nodeip-hint-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-nodeip-hint-worker spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<encoded_content> 1 mode: 0644 overwrite: true path: /etc/default/nodeip-configuration",
"E0514 12:47:17.998892 2281 daemon.go:1350] content mismatch for file /etc/systemd/system/ovs-vswitchd.service: [Unit]",
"oc debug node/<node_name>",
"chroot /host",
"ovs-appctl vlog/list",
"console syslog file ------- ------ ------ backtrace OFF INFO INFO bfd OFF INFO INFO bond OFF INFO INFO bridge OFF INFO INFO bundle OFF INFO INFO bundles OFF INFO INFO cfm OFF INFO INFO collectors OFF INFO INFO command_line OFF INFO INFO connmgr OFF INFO INFO conntrack OFF INFO INFO conntrack_tp OFF INFO INFO coverage OFF INFO INFO ct_dpif OFF INFO INFO daemon OFF INFO INFO daemon_unix OFF INFO INFO dns_resolve OFF INFO INFO dpdk OFF INFO INFO",
"Restart=always ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /var/lib/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /etc/openvswitch' ExecStartPre=-/bin/sh -c '/usr/bin/chown -R :USDUSD{OVS_USER_ID##*:} /run/openvswitch' ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg",
"systemctl daemon-reload",
"systemctl restart ovs-vswitchd",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master 1 name: 99-change-ovs-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - dropins: - contents: | [Service] ExecStartPost=-/usr/bin/ovs-appctl vlog/set syslog:dbg 2 ExecReload=-/usr/bin/ovs-appctl vlog/set syslog:dbg name: 20-ovs-vswitchd-restart.conf name: ovs-vswitchd.service",
"oc apply -f 99-change-ovs-loglevel.yaml",
"oc adm node-logs <node_name> -u ovs-vswitchd",
"journalctl -b -f -u ovs-vswitchd.service",
"oc get subs -n <operator_namespace>",
"oc describe sub <subscription_name> -n <operator_namespace>",
"Name: cluster-logging Namespace: openshift-logging Labels: operators.coreos.com/cluster-logging.openshift-logging= Annotations: <none> API Version: operators.coreos.com/v1alpha1 Kind: Subscription Conditions: Last Transition Time: 2019-07-29T13:42:57Z Message: all available catalogsources are healthy Reason: AllCatalogSourcesHealthy Status: False Type: CatalogSourcesUnhealthy",
"oc get catalogsources -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 55m community-operators Community Operators grpc Red Hat 55m example-catalog Example Catalog grpc Example Org 2m25s redhat-marketplace Red Hat Marketplace grpc Red Hat 55m redhat-operators Red Hat Operators grpc Red Hat 55m",
"oc describe catalogsource example-catalog -n openshift-marketplace",
"Name: example-catalog Namespace: openshift-marketplace Labels: <none> Annotations: operatorframework.io/managed-by: marketplace-operator target.workload.openshift.io/management: {\"effect\": \"PreferredDuringScheduling\"} API Version: operators.coreos.com/v1alpha1 Kind: CatalogSource Status: Connection State: Address: example-catalog.openshift-marketplace.svc:50051 Last Connect: 2021-09-09T17:07:35Z Last Observed State: TRANSIENT_FAILURE Registry Service: Created At: 2021-09-09T17:05:45Z Port: 50051 Protocol: grpc Service Name: example-catalog Service Namespace: openshift-marketplace",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE certified-operators-cv9nn 1/1 Running 0 36m community-operators-6v8lp 1/1 Running 0 36m marketplace-operator-86bfc75f9b-jkgbc 1/1 Running 0 42m example-catalog-bwt8z 0/1 ImagePullBackOff 0 3m55s redhat-marketplace-57p8c 1/1 Running 0 36m redhat-operators-smxx8 1/1 Running 0 36m",
"oc describe pod example-catalog-bwt8z -n openshift-marketplace",
"Name: example-catalog-bwt8z Namespace: openshift-marketplace Priority: 0 Node: ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 48s default-scheduler Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd Normal AddedInterface 47s multus Add eth0 [10.131.0.40/23] from openshift-sdn Normal BackOff 20s (x2 over 46s) kubelet Back-off pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 20s (x2 over 46s) kubelet Error: ImagePullBackOff Normal Pulling 8s (x3 over 47s) kubelet Pulling image \"quay.io/example-org/example-catalog:v1\" Warning Failed 8s (x3 over 47s) kubelet Failed to pull image \"quay.io/example-org/example-catalog:v1\": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized Warning Failed 8s (x3 over 47s) kubelet Error: ErrImagePull",
"oc get clusteroperators",
"oc get pod -n <operator_namespace>",
"oc describe pod <operator_pod_name> -n <operator_namespace>",
"oc debug node/my-node",
"chroot /host",
"crictl ps",
"crictl ps --name network-operator",
"oc get pods -n <operator_namespace>",
"oc logs pod/<pod_name> -n <operator_namespace>",
"oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>",
"ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: true 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool spec: paused: false 1",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":true}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"true",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-33cf0a1254318755d7b48002c597bf91 True False worker rendered-worker-e405a5bdb0db1295acea08bcca33fa60 False False",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/master",
"oc patch --type=merge --patch='{\"spec\":{\"paused\":false}}' machineconfigpool/worker",
"oc get machineconfigpool/master --template='{{.spec.paused}}'",
"oc get machineconfigpool/worker --template='{{.spec.paused}}'",
"false",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"ImagePullBackOff for Back-off pulling image \"example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e\"",
"rpc error: code = Unknown desc = error pinging docker registry example.com: Get \"https://example.com/v2/\": dial tcp: lookup example.com on 10.0.0.1:53: no such host",
"oc get sub,csv -n <namespace>",
"NAME PACKAGE SOURCE CHANNEL subscription.operators.coreos.com/elasticsearch-operator elasticsearch-operator redhat-operators 5.0 NAME DISPLAY VERSION REPLACES PHASE clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65 OpenShift Elasticsearch Operator 5.0.0-65 Succeeded",
"oc delete subscription <subscription_name> -n <namespace>",
"oc delete csv <csv_name> -n <namespace>",
"oc get job,configmap -n openshift-marketplace",
"NAME COMPLETIONS DURATION AGE job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 1/1 26s 9m30s NAME DATA AGE configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb 3 9m30s",
"oc delete job <job_name> -n openshift-marketplace",
"oc delete configmap <configmap_name> -n openshift-marketplace",
"oc get sub,csv,installplan -n <namespace>",
"message: 'Failed to delete all resource types, 1 remaining: Internal error occurred: error resolving resource'",
"oc get namespaces",
"operator-ns-1 Terminating",
"oc get crds",
"oc delete crd <crd_name>",
"oc get EtcdCluster -n <namespace_name>",
"oc get EtcdCluster --all-namespaces",
"oc delete <cr_name> <cr_instance_name> -n <namespace_name>",
"oc get namespace <namespace_name>",
"oc get sub,csv,installplan -n <namespace>",
"oc project <project_name>",
"oc get pods",
"oc status",
"skopeo inspect docker://<image_reference>",
"oc edit deployment/my-deployment",
"oc get pods -w",
"oc get events",
"oc logs <pod_name>",
"oc logs <pod_name> -c <container_name>",
"oc exec <pod_name> -- ls -alh /var/log",
"total 124K drwxr-xr-x. 1 root root 33 Aug 11 11:23 . drwxr-xr-x. 1 root root 28 Sep 6 2022 .. -rw-rw----. 1 root utmp 0 Jul 10 10:31 btmp -rw-r--r--. 1 root root 33K Jul 17 10:07 dnf.librepo.log -rw-r--r--. 1 root root 69K Jul 17 10:07 dnf.log -rw-r--r--. 1 root root 8.8K Jul 17 10:07 dnf.rpm.log -rw-r--r--. 1 root root 480 Jul 17 10:07 hawkey.log -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 lastlog drwx------. 2 root root 23 Aug 11 11:14 openshift-apiserver drwx------. 2 root root 6 Jul 10 10:31 private drwxr-xr-x. 1 root root 22 Mar 9 08:05 rhsm -rw-rw-r--. 1 root utmp 0 Jul 10 10:31 wtmp",
"oc exec <pod_name> cat /var/log/<path_to_log>",
"2023-07-10T10:29:38+0000 INFO --- logging initialized --- 2023-07-10T10:29:38+0000 DDEBUG timer: config: 13 ms 2023-07-10T10:29:38+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, product-id, repoclosure, repodiff, repograph, repomanage, reposync, subscription-manager, uploadprofile 2023-07-10T10:29:38+0000 INFO Updating Subscription Management repositories. 2023-07-10T10:29:38+0000 INFO Unable to read consumer identity 2023-07-10T10:29:38+0000 INFO Subscription Manager is operating in container mode. 2023-07-10T10:29:38+0000 INFO",
"oc exec <pod_name> -c <container_name> ls /var/log",
"oc exec <pod_name> -c <container_name> cat /var/log/<path_to_log>",
"oc project <namespace>",
"oc rsh <pod_name> 1",
"oc rsh -c <container_name> pod/<pod_name>",
"oc port-forward <pod_name> <host_port>:<pod_port> 1",
"oc get deployment -n <project_name>",
"oc debug deployment/my-deployment --as-root -n <project_name>",
"oc get deploymentconfigs -n <project_name>",
"oc debug deploymentconfig/my-deployment-configuration --as-root -n <project_name>",
"oc cp <local_path> <pod_name>:/<path> -c <container_name> 1",
"oc cp <pod_name>:/<path> -c <container_name> <local_path> 1",
"oc get pods -w 1",
"oc logs -f pod/<application_name>-<build_number>-build",
"oc logs -f pod/<application_name>-<build_number>-deploy",
"oc logs -f pod/<application_name>-<build_number>-<random_string>",
"oc describe pod/my-app-1-akdlg",
"oc logs -f pod/my-app-1-akdlg",
"oc exec my-app-1-akdlg -- cat /var/log/my-application.log",
"oc debug dc/my-deployment-configuration --as-root -- cat /var/log/my-application.log",
"oc exec -it my-app-1-akdlg /bin/bash",
"oc debug node/my-cluster-node",
"chroot /host",
"crictl ps",
"crictl inspect a7fe32346b120 --output yaml | grep 'pid:' | awk '{print USD2}'",
"nsenter -n -t 31150 -- ip ad",
"Unable to attach or mount volumes: unmounted volumes=[sso-mysql-pvol], unattached volumes=[sso-mysql-pvol default-token-x4rzc]: timed out waiting for the condition Multi-Attach error for volume \"pvc-8837384d-69d7-40b2-b2e6-5df86943eef9\" Volume is already used by pod(s) sso-mysql-1-ns6b4",
"oc delete pod <old_pod> --force=true --grace-period=0",
"oc logs -f deployment/windows-machine-config-operator -n openshift-windows-machine-config-operator",
"ssh -t -o StrictHostKeyChecking=no -o ProxyCommand='ssh -A -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -W %h:%p core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")' <username>@<windows_node_internal_ip> 1 2",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"ssh -L 2020:<windows_node_internal_ip>:3389 \\ 1 core@USD(oc get service --all-namespaces -l run=ssh-bastion -o go-template=\"{{ with (index (index .items 0).status.loadBalancer.ingress 0) }}{{ or .hostname .ip }}{{end}}\")",
"oc get nodes <node_name> -o jsonpath={.status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}",
"C:\\> net user <username> * 1",
"oc adm node-logs -l kubernetes.io/os=windows --path= /ip-10-0-138-252.us-east-2.compute.internal containers /ip-10-0-138-252.us-east-2.compute.internal hybrid-overlay /ip-10-0-138-252.us-east-2.compute.internal kube-proxy /ip-10-0-138-252.us-east-2.compute.internal kubelet /ip-10-0-138-252.us-east-2.compute.internal pods",
"oc adm node-logs -l kubernetes.io/os=windows --path=/kubelet/kubelet.log",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal",
"oc adm node-logs -l kubernetes.io/os=windows --path=journal -u docker",
"C:\\> powershell",
"C:\\> Get-EventLog -LogName Application -Source Docker",
"oc -n ns1 get service prometheus-example-app -o yaml",
"labels: app: prometheus-example-app",
"oc -n ns1 get servicemonitor prometheus-example-monitor -o yaml",
"apiVersion: v1 kind: ServiceMonitor metadata: name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app",
"oc -n openshift-user-workload-monitoring get pods",
"NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m",
"oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator",
"level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug",
"oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"",
"- --log-level=debug",
"oc -n openshift-user-workload-monitoring get pods",
"topk(10, max by(namespace, job) (topk by(namespace, job) (1, scrape_samples_post_metric_relabeling)))",
"topk(10, sum by(namespace, job) (sum_over_time(scrape_series_added[1h])))",
"HOST=USD(oc -n openshift-monitoring get route prometheus-k8s -ojsonpath='{.status.ingress[].host}')",
"TOKEN=USD(oc whoami -t)",
"curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/status/tsdb\"",
"\"status\": \"success\",\"data\":{\"headStats\":{\"numSeries\":507473, \"numLabelPairs\":19832,\"chunkCount\":946298,\"minTime\":1712253600010, \"maxTime\":1712257935346},\"seriesCountByMetricName\": [{\"name\":\"etcd_request_duration_seconds_bucket\",\"value\":51840}, {\"name\":\"apiserver_request_sli_duration_seconds_bucket\",\"value\":47718},",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 -c prometheus --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'cd /prometheus/;du -hs USD(ls -dtr */ | grep -Eo \"[0-9|A-Z]{26}\")'",
"308M 01HVKMPKQWZYWS8WVDAYQHNMW6 52M 01HVK64DTDA81799TBR9QDECEZ 102M 01HVK64DS7TRZRWF2756KHST5X 140M 01HVJS59K11FBVAPVY57K88Z11 90M 01HVH2A5Z58SKT810EM6B9AT50 152M 01HV8ZDVQMX41MKCN84S32RRZ1 354M 01HV6Q2N26BK63G4RYTST71FBF 156M 01HV664H9J9Z1FTZD73RD1563E 216M 01HTHXB60A7F239HN7S2TENPNS 104M 01HTHMGRXGS0WXA3WATRXHR36B",
"oc debug prometheus-k8s-0 -n openshift-monitoring -c prometheus --image=USD(oc get po -n openshift-monitoring prometheus-k8s-0 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- sh -c 'ls -latr /prometheus/ | egrep -o \"[0-9|A-Z]{26}\" | head -3 | while read BLOCK; do rm -r /prometheus/USDBLOCK; done'",
"oc debug <prometheus_k8s_pod_name> -n openshift-monitoring \\ 1 --image=USD(oc get po -n openshift-monitoring <prometheus_k8s_pod_name> \\ 2 -o jsonpath='{.spec.containers[?(@.name==\"prometheus\")].image}') -- df -h /prometheus/",
"Starting pod/prometheus-k8s-0-debug-j82w4 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p4 40G 15G 40G 37% /prometheus Removing debug pod",
"oc <command> --loglevel <log_level>",
"oc whoami -t",
"sha256~RCV3Qcn7H-OEfqCGVI0CvnZ6"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/support/troubleshooting
|
Chapter 6. Configure Public Key based SSH Authentication without a password
|
Chapter 6. Configure Public Key based SSH Authentication without a password Configure public key based SSH authentication without a password for the root user on the first hyperconverged host to all hosts, including itself . Do this for all storage and management interfaces, and for both IP addresses and FQDNs. 6.1. Generating SSH key pairs without a password Generating a public/private key pair lets you use key-based SSH authentication. Generating a key pair that does not use a password makes it simpler to use Ansible to automate deployment and configuration processes. Procedure Log in to the first hyperconverged host as the root user. Generate an SSH key that does not use a password. Start the key generation process. Enter a location for the key. The default location, shown in parentheses, is used if no other input is provided. Specify and confirm an empty passphrase by pressing Enter twice. The private key is saved in <location>/<keyname> . The public key is saved in <location>/<keyname>.pub . Warning Your identification in this output is your private key. Never share your private key. Possession of your private key allows someone else to impersonate you on any system that has your public key. 6.2. Copying SSH keys To access a host using your private key, that host needs a copy of your public key. Prerequisites Generate a public/private key pair with no password. Procedure Log in to the first host as the root user. Copy your public key to each host that you want to access, including the host on which you execute the command, using both the front-end and the back-end FQDNs. Enter the password for <user>@<hostname> when prompted. Warning Make sure that you use the file that ends in .pub . Never share your private key. Possession of your private key allows someone else to impersonate you on any system that has your public key. For example, if you are logged in as the root user on server1.example.com , you would run the following commands for a three node deployment:
|
[
"ssh-keygen -t rsa Generating public/private rsa key pair.",
"Enter file in which to save the key (/home/username/.ssh/id_rsa): <location>/<keyname>",
"Enter passphrase (empty for no passphrase): Enter same passphrase again:",
"Your identification has been saved in <location>/<keyname>. Your public key has been saved in <location>/<keyname>.pub. The key fingerprint is SHA256:8BhZageKrLXM99z5f/AM9aPo/KAUd8ZZFPcPFWqK6+M [email protected] The key's randomart image is: +---[ECDSA 256]---+ | . . +=| | . . . = o.o| | + . * . o...| | = . . * . + +..| |. + . . So o * ..| | . o . .+ = ..| | o oo ..=. .| | ooo...+ | | .E++oo | +----[SHA256]-----+",
"ssh-copy-id -i <location>/<keyname>.pub <user>@<hostname>",
"ssh-copy-id -i <location>/<keyname>.pub [email protected] ssh-copy-id -i <location>/<keyname>.pub [email protected] ssh-copy-id -i <location>/<keyname>.pub [email protected] ssh-copy-id -i <location>/<keyname>.pub [email protected] ssh-copy-id -i <location>/<keyname>.pub [email protected] ssh-copy-id -i <location>/<keyname>.pub [email protected]"
] |
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/automating_rhhi_for_virtualization_deployment/task-configure-key-based-ssh-auth
|
Chapter 4. Deploying Red Hat Quay on premise
|
Chapter 4. Deploying Red Hat Quay on premise The following image shows examples for on premise configuration, for the following types of deployments: Standalone Proof of Concept Highly available deployment on multiple hosts Deployment on an OpenShift Container Platform cluster by using the Red Hat Quay Operator On premise example configurations 4.1. Red Hat Quay example deployments The following image shows three possible deployments for Red Hat Quay: Deployment examples Proof of Concept Running Red Hat Quay, Clair, and mirroring on a single node, with local image storage and local database Single data center Running highly available Red Hat Quay, Clair, and mirroring, on multiple nodes, with HA database and image storage Multiple data centers Running highly available Red Hat Quay, Clair, and mirroring, on multiple nodes in multiple data centers, with HA database and image storage 4.2. Red Hat Quay deployment topology The following image provides a high level overview of a Red Hat Quay deployment topology: Red Hat Quay deployment topology In this deployment, all pushes, user interface, and API requests are received by public Red Hat Quay endpoints. Pulls are served directly from object storage . 4.3. Red Hat Quay deployment topology with storage proxy The following image provides a high level overview of a Red Hat Quay deployment topology with storage proxy configured: Red Hat Quay deployment topology with storage proxy With storage proxy configured, all traffic passes through the public Red Hat Quay endpoint.
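For reference, the storage proxy topology described above is typically enabled through the Red Hat Quay config.yaml rather than through the deployment layout itself. The flag shown below, FEATURE_PROXY_STORAGE, is an assumption based on the general Red Hat Quay configuration schema and is not taken from this chapter; confirm it against the configuration guide for your Red Hat Quay version: FEATURE_PROXY_STORAGE: true When the flag is absent or set to false, client pulls are redirected to object storage directly, as in the topology without a storage proxy.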
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_architecture/sample-quay-on-prem-intro
|
Chapter 4. ClusterCSIDriver [operator.openshift.io/v1]
|
Chapter 4. ClusterCSIDriver [operator.openshift.io/v1] Description ClusterCSIDriver object allows management and configuration of a CSI driver operator installed by default in OpenShift. Name of the object must be name of the CSI driver it operates. See CSIDriverName type for list of allowed values. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 4.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description driverConfig object driverConfig can be used to specify platform specific driver configuration. When omitted, this means no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". storageClassState string StorageClassState determines if CSI operator should create and manage storage classes. If this field value is empty or Managed - CSI operator will continuously reconcile storage class and create if necessary. If this field value is Unmanaged - CSI operator will not reconcile any previously created storage class. If this field value is Removed - CSI operator will delete the storage class it created previously. When omitted, this means the user has no opinion and the platform chooses a reasonable default, which is subject to change over time. The current default behaviour is Managed. unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. 
Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 4.1.2. .spec.driverConfig Description driverConfig can be used to specify platform specific driver configuration. When omitted, this means no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required driverType Property Type Description aws object aws is used to configure the AWS CSI driver. azure object azure is used to configure the Azure CSI driver. driverType string driverType indicates type of CSI driver for which the driverConfig is being applied to. Valid values are: AWS, Azure, GCP, IBMCloud, vSphere and omitted. Consumers should treat unknown values as a NO-OP. gcp object gcp is used to configure the GCP CSI driver. ibmcloud object ibmcloud is used to configure the IBM Cloud CSI driver. vSphere object vsphere is used to configure the vsphere CSI driver. 4.1.3. .spec.driverConfig.aws Description aws is used to configure the AWS CSI driver. Type object Property Type Description kmsKeyARN string kmsKeyARN sets the cluster default storage class to encrypt volumes with a user-defined KMS key, rather than the default KMS key used by AWS. The value may be either the ARN or Alias ARN of a KMS key. 4.1.4. .spec.driverConfig.azure Description azure is used to configure the Azure CSI driver. Type object Property Type Description diskEncryptionSet object diskEncryptionSet sets the cluster default storage class to encrypt volumes with a customer-managed encryption set, rather than the default platform-managed keys. 4.1.5. .spec.driverConfig.azure.diskEncryptionSet Description diskEncryptionSet sets the cluster default storage class to encrypt volumes with a customer-managed encryption set, rather than the default platform-managed keys. Type object Required name resourceGroup subscriptionID Property Type Description name string name is the name of the disk encryption set that will be set on the default storage class. The value should consist of only alphanumberic characters, underscores (_), hyphens, and be at most 80 characters in length. resourceGroup string resourceGroup defines the Azure resource group that contains the disk encryption set. The value should consist of only alphanumberic characters, underscores (_), parentheses, hyphens and periods. The value should not end in a period and be at most 90 characters in length. subscriptionID string subscriptionID defines the Azure subscription that contains the disk encryption set. The value should meet the following conditions: 1. It should be a 128-bit number. 2. It should be 36 characters (32 hexadecimal characters and 4 hyphens) long. 3. It should be displayed in five groups separated by hyphens (-). 4. The first group should be 8 characters long. 5. The second, third, and fourth groups should be 4 characters long. 6. The fifth group should be 12 characters long. An Example SubscrionID: f2007bbf-f802-4a47-9336-cf7c6b89b378 4.1.6. .spec.driverConfig.gcp Description gcp is used to configure the GCP CSI driver. Type object Property Type Description kmsKey object kmsKey sets the cluster default storage class to encrypt volumes with customer-supplied encryption keys, rather than the default keys managed by GCP. 4.1.7. 
.spec.driverConfig.gcp.kmsKey Description kmsKey sets the cluster default storage class to encrypt volumes with customer-supplied encryption keys, rather than the default keys managed by GCP. Type object Required keyRing name projectID Property Type Description keyRing string keyRing is the name of the KMS Key Ring which the KMS Key belongs to. The value should correspond to an existing KMS key ring and should consist of only alphanumeric characters, hyphens (-) and underscores (_), and be at most 63 characters in length. location string location is the GCP location in which the Key Ring exists. The value must match an existing GCP location, or "global". Defaults to global, if not set. name string name is the name of the customer-managed encryption key to be used for disk encryption. The value should correspond to an existing KMS key and should consist of only alphanumeric characters, hyphens (-) and underscores (_), and be at most 63 characters in length. projectID string projectID is the ID of the Project in which the KMS Key Ring exists. It must be 6 to 30 lowercase letters, digits, or hyphens. It must start with a letter. Trailing hyphens are prohibited. 4.1.8. .spec.driverConfig.ibmcloud Description ibmcloud is used to configure the IBM Cloud CSI driver. Type object Required encryptionKeyCRN Property Type Description encryptionKeyCRN string encryptionKeyCRN is the IBM Cloud CRN of the customer-managed root key to use for disk encryption of volumes for the default storage classes. 4.1.9. .spec.driverConfig.vSphere Description vsphere is used to configure the vsphere CSI driver. Type object Property Type Description globalMaxSnapshotsPerBlockVolume integer globalMaxSnapshotsPerBlockVolume is a global configuration parameter that applies to volumes on all kinds of datastores. If omitted, the platform chooses a default, which is subject to change over time, currently that default is 3. Snapshots can not be disabled using this parameter. Increasing number of snapshots above 3 can have negative impact on performance, for more details see: https://kb.vmware.com/s/article/1025279 Volume snapshot documentation: https://docs.vmware.com/en/VMware-vSphere-Container-Storage-Plug-in/3.0/vmware-vsphere-csp-getting-started/GUID-E0B41C69-7EEB-450F-A73D-5FD2FF39E891.html granularMaxSnapshotsPerBlockVolumeInVSAN integer granularMaxSnapshotsPerBlockVolumeInVSAN is a granular configuration parameter on vSAN datastore only. It overrides GlobalMaxSnapshotsPerBlockVolume if set, while it falls back to the global constraint if unset. Snapshots for VSAN can not be disabled using this parameter. granularMaxSnapshotsPerBlockVolumeInVVOL integer granularMaxSnapshotsPerBlockVolumeInVVOL is a granular configuration parameter on Virtual Volumes datastore only. It overrides GlobalMaxSnapshotsPerBlockVolume if set, while it falls back to the global constraint if unset. Snapshots for VVOL can not be disabled using this parameter. topologyCategories array (string) topologyCategories indicates tag categories with which vcenter resources such as hostcluster or datacenter were tagged with. If cluster Infrastructure object has a topology, values specified in Infrastructure object will be used and modifications to topologyCategories will be rejected. 4.1.10. .status Description status holds observed values from the cluster. They may not be overridden. 
Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 4.1.11. .status.conditions Description conditions is a list of conditions and their status Type array 4.1.12. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 4.1.13. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 4.1.14. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 4.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/clustercsidrivers DELETE : delete collection of ClusterCSIDriver GET : list objects of kind ClusterCSIDriver POST : create a ClusterCSIDriver /apis/operator.openshift.io/v1/clustercsidrivers/{name} DELETE : delete a ClusterCSIDriver GET : read the specified ClusterCSIDriver PATCH : partially update the specified ClusterCSIDriver PUT : replace the specified ClusterCSIDriver /apis/operator.openshift.io/v1/clustercsidrivers/{name}/status GET : read status of the specified ClusterCSIDriver PATCH : partially update status of the specified ClusterCSIDriver PUT : replace status of the specified ClusterCSIDriver 4.2.1. /apis/operator.openshift.io/v1/clustercsidrivers HTTP method DELETE Description delete collection of ClusterCSIDriver Table 4.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterCSIDriver Table 4.2. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterCSIDriver Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.4. Body parameters Parameter Type Description body ClusterCSIDriver schema Table 4.5. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 201 - Created ClusterCSIDriver schema 202 - Accepted ClusterCSIDriver schema 401 - Unauthorized Empty 4.2.2. /apis/operator.openshift.io/v1/clustercsidrivers/{name} Table 4.6. Global path parameters Parameter Type Description name string name of the ClusterCSIDriver HTTP method DELETE Description delete a ClusterCSIDriver Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterCSIDriver Table 4.9. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterCSIDriver Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.11. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterCSIDriver Table 4.12.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body ClusterCSIDriver schema Table 4.14. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 201 - Created ClusterCSIDriver schema 401 - Unauthorized Empty 4.2.3. /apis/operator.openshift.io/v1/clustercsidrivers/{name}/status Table 4.15. Global path parameters Parameter Type Description name string name of the ClusterCSIDriver HTTP method GET Description read status of the specified ClusterCSIDriver Table 4.16. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterCSIDriver Table 4.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.18. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterCSIDriver Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body ClusterCSIDriver schema Table 4.21. HTTP responses HTTP code Response body 200 - OK ClusterCSIDriver schema 201 - Created ClusterCSIDriver schema 401 - Unauthorized Empty
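The endpoints above are standard Kubernetes-style REST paths, so any Kubernetes API client can drive them. The following minimal sketch (not part of the reference itself) uses the kubernetes Python client; the object name ebs.csi.aws.com, the spec field used in the patch, and the availability of the dry_run keyword are assumptions made for illustration only.

from kubernetes import client, config

config.load_kube_config()          # use config.load_incluster_config() when running inside a pod
api = client.CustomObjectsApi()

group, version, plural = "operator.openshift.io", "v1", "clustercsidrivers"
name = "ebs.csi.aws.com"           # assumed example; ClusterCSIDriver objects are named after the CSI driver they manage

# GET /apis/operator.openshift.io/v1/clustercsidrivers/{name}
driver = api.get_cluster_custom_object(group, version, plural, name)
for cond in driver.get("status", {}).get("conditions", []):
    print(cond.get("type"), cond.get("status"), cond.get("reason"))

# PATCH /apis/operator.openshift.io/v1/clustercsidrivers/{name}
# dry_run maps to the dryRun query parameter described above; drop it if your client version does not accept it.
patch = {"spec": {"logLevel": "Debug"}}   # logLevel is used here purely for illustration
api.patch_cluster_custom_object(group, version, plural, name, patch, dry_run="All")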
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/clustercsidriver-operator-openshift-io-v1
Configuring the Bare Metal Provisioning service
Configuring the Bare Metal Provisioning service Red Hat OpenStack Services on OpenShift 18.0 Enabling and configuring the Bare Metal Provisioning service (ironic) for Bare Metal as a Service (BMaaS) OpenStack Documentation Team [email protected]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/index
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 6.9-1 Tue Mar 14 2017 Petr Bokoc Red Hat Enterprise Linux 6.9 General Availability release. Revision 6.8-5 Mon May 02 2016 Petr Bokoc Preparing document for 6.8 GA publication. Revision 6.8-1 Wed Mar 02 2016 Petr Bokoc Preparing document for 6.8 Beta publication. Revision 6.7-6 Wed Aug 05 2015 Laura Bailey Correcting typographical error. Revision 6.7-5 Tue Jul 30 2015 Mark Flitter Updated to include Red Hat Access Insights Revision 6.7-2 Tue Jul 28 2015 Mark Flitter Publishing for RHEL 6.7 Revision 6.7-1 Tue Jul 28 2015 Mark Flitter Publishing for RHEL 6.7 Revision 6.6-2 Fri Apr 17 2015 Laura Bailey Publishing for RHEL 6.7 Beta. Revision 6.6-1 Tue Dec 16 2014 Laura Bailey Updated metadata to aid doc presentation on the portal. Revision 6.2-8 Thu Aug 18 2014 Laura Bailey Combined two separate list items that were intended to be a single point. Revision 6.2-7 Mon Aug 11 2014 Laura Bailey Corrected the name of the replaced sysklogd package, BZ1088684. Revision 6.2-6 Fri Aug 08 2014 Laura Bailey Preparing document for release alongside RHEL 6.6 Beta. Revision 6.2-5 Tue Jul 29 2014 Laura Bailey Updated wording in libnl3/libnl parallel install section, BZ1092776. Revision 6.2-4 Tue Jul 29 2014 Laura Bailey Actually added sections on Preupgrade Assistant BZ1088147 and Red Hat Upgrade Tool BZ1087196. Revision 6.2-3 Mon Jul 28 2014 Laura Bailey Additional details about rsyslog and libnl3. Revision 6.2-1 Wed Jun 11 2014 Laura Bailey Added details about RSA and DSA key generation changes BZ1088154. Confirmed by tmraz. Revision 6.2-0 Wed Jun 04 2014 Laura Bailey Added section on Preupgrade Assistant and Red Hat Upgrade Tool Revision 6.1-92 Wed May 27 2014 Laura Bailey Corrected ABRT section in the System Monitoring and Kernel chapter BZ710098. Revision 6.1-91 Mon May 19 2014 Laura Bailey Added detail to ABRT section in the System Monitoring and Kernel chapter BZ710098. Revision 6.1-89 Fri May 16 2014 Laura Bailey Misc. updates Revision 6.1-83 Wed Sep 18 2013 Laura Bailey Noted LVS-sync interoperability issues between RHEL 6.4 and RHEL 6.5. BZ#1008708 Revision 6.1-81 Thu Sep 05 2013 Laura Bailey Applied final SME feedback for RHEL 6.5. Revision 6.1-80 Tue Aug 13 2013 Laura Bailey Applied SME feedback. Revision 6.1-77 Fri Aug 09 2013 Laura Bailey Applied SME feedback. Revision 6.1-74 Thu Aug 01 2013 Laura Bailey Miscellaneous updates. Revision 6.1-69 Wed Nov 21 2012 Scott Radvan cryptoloop is deprecated. Revision 6.1-68 Sun Oct 14 2012 Scott Radvan Mention Samba 3.6 and link to Release Notes. Revision 6.1-67 Mon Sep 10 2012 Scott Radvan BZ#847907. Modify tape device limit. Revision 6.1-66 Mon Sep 3 2012 Scott Radvan Fix typo reported in BZ#853204 Revision 6.1-65 Mon Sep 3 2012 Scott Radvan Include information about the dracut.conf.d configuration directory Revision 6.1-64 Mon Aug 27 2012 Scott Radvan Fix minor typos. Revision 6.1-63 Mon Aug 27 2012 Scott Radvan add id tags throughout guide sections Revision 6.1-62 Mon Aug 27 2012 Scott Radvan Add change of supported tape drives. Revision 6.1-61 Mon Jun 18 2012 Scott Radvan Publish for 6.3 GA release. Revision 6.1-59 Fri Feb 17 2012 Scott Radvan Drop fusecompress section as raised in BZ#791258. Revision 6.1-58 Mon Jan 16 2012 Scott Radvan Fix minor typographical errors raised in BZ#664683. Revision 6.1-57 Mon Jan 16 2012 Scott Radvan Note that joystick support is not provided in the default kernel. BZ#664683. Revision 6.1-55 Mon Nov 28 2011 Scott Radvan Review for 6.2 release. 
Revision 6.1-39 Wed May 18 2011 Scott Radvan Review for 6.1 release.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/appe-publican-revision_history
Part VI. Appendices
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/appendices
8.6. Configure ODBC Options on Microsoft Windows
8.6. Configure ODBC Options on Microsoft Windows Prerequisites You must have logged into the workstation with administrative rights. You must have used the Control Panel's Data Sources (ODBC) applet to add a new data source name. Each data source name you configure can only access one VDB within a Teiid System. To make more than one VDB available, you need to configure more than one data source name. From the Start menu, select Settings -> Control Panel. The Control Panel displays. Double-click Administrative Tools. Double-click Data Sources (ODBC). The ODBC Data Source Administrator applet displays. Click the tab associated with the type of DSN you want to add. The Create New Data Source dialog box displays. In the Select a driver for which you want to set up a data source table, select PostgreSQL Unicode. Click Finish. In the Data Source Name edit box, type the name you want to assign to this data source. In the Database edit box, type the name of the virtual database you want to access through this data source. In the Server edit box, type the host name or IP address of your Teiid runtime. Note If you are connecting via a firewall or NAT address, you must enter either the firewall address or the NAT address. In the Port edit box, type the port number on which the system listens for ODBC requests. (By default, Red Hat JBoss Data Virtualization listens for ODBC requests on port 35432.) In the User Name and Password edit boxes, supply the user name and password for Teiid runtime access. Leave SSL Mode disabled. (SSL connections are unsupported at present.) Optionally, provide a description of the data source in the Description field. Click the Datasource button and configure the options. Tick Parse Statements, Recognize Unique Indexes, Maximum, Text as LongVarChar and Bool as Char, and set MaxVarChar to 255, Max LongVarChar to 8190, Cache Size to 100 and SysTable Prefixes to dd_:. On the second page, click LF, Server side prepare, default, 7.4+ and set the Extra Opts to 0x0. Click Save. You can optionally click Test to validate your connection if Red Hat JBoss Data Virtualization is running. Table 8.1. Primary ODBC Settings for Red Hat JBoss Data Virtualization Name Description Updateable Cursors and Row Versioning Should not be used. Use serverside prepare and Parse Statements and Disallow Premature It is recommended that Use serverside prepare is enabled and Parse Statements/Disallow Premature are disabled. SSL mode See Security Guide https://access.redhat.com/documentation/en/red-hat-jboss-data-virtualization/6.4/paged/security-guide/ Use Declare/Fetch cursors and Fetch Max Count Should be used to better manage resources when large result sets are used. Logging/debug settings can be used as needed. Settings that manipulate datatypes, metadata, or optimizations such as Show SystemTables, True is -1, Backend genetic optimizer, Bytea as LongVarBinary, Bools as Char are ignored by the server and have no client-side effect. Any other setting that does have a client-side effect, such as LF to CR/LF conversion, may be used if desired, but there is currently no server-side usage of the setting.
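After the DSN is saved, client applications can use it like any other ODBC data source. The following minimal sketch (not part of this guide) uses Python's pyodbc module; the DSN name TeiidVDB, the credentials, and the queried table are placeholder assumptions.

import pyodbc

# Connect through the DSN configured above; the DSN name and credentials are placeholders.
conn = pyodbc.connect("DSN=TeiidVDB;UID=myuser;PWD=mypassword")
cursor = conn.cursor()

# Run a query against the virtual database (the table name is illustrative).
cursor.execute("SELECT * FROM SYS.Tables")
for row in cursor.fetchmany(5):
    print(row)

conn.close()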
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/set_the_odbc_driver_global_options1
Chapter 1. Common object reference
Chapter 1. Common object reference 1.1. com.coreos.monitoring.v1.AlertmanagerList schema Description AlertmanagerList is a list of Alertmanager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Alertmanager) List of alertmanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.2. com.coreos.monitoring.v1.PodMonitorList schema Description PodMonitorList is a list of PodMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodMonitor) List of podmonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.3. com.coreos.monitoring.v1.ProbeList schema Description ProbeList is a list of Probe Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Probe) List of probes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.4. com.coreos.monitoring.v1.PrometheusList schema Description PrometheusList is a list of Prometheus Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Prometheus) List of prometheuses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.5. com.coreos.monitoring.v1.PrometheusRuleList schema Description PrometheusRuleList is a list of PrometheusRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PrometheusRule) List of prometheusrules. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.6. com.coreos.monitoring.v1.ServiceMonitorList schema Description ServiceMonitorList is a list of ServiceMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceMonitor) List of servicemonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.7. com.coreos.monitoring.v1.ThanosRulerList schema Description ThanosRulerList is a list of ThanosRuler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ThanosRuler) List of thanosrulers. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.8. com.coreos.monitoring.v1beta1.AlertmanagerConfigList schema Description AlertmanagerConfigList is a list of AlertmanagerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertmanagerConfig) List of alertmanagerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.9. com.coreos.operators.v1.OLMConfigList schema Description OLMConfigList is a list of OLMConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OLMConfig) List of olmconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.10. com.coreos.operators.v1.OperatorGroupList schema Description OperatorGroupList is a list of OperatorGroup Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorGroup) List of operatorgroups. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.11. com.coreos.operators.v1.OperatorList schema Description OperatorList is a list of Operator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Operator) List of operators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.12. com.coreos.operators.v1alpha1.CatalogSourceList schema Description CatalogSourceList is a list of CatalogSource Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CatalogSource) List of catalogsources. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.13. com.coreos.operators.v1alpha1.ClusterServiceVersionList schema Description ClusterServiceVersionList is a list of ClusterServiceVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterServiceVersion) List of clusterserviceversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.14. 
com.coreos.operators.v1alpha1.InstallPlanList schema Description InstallPlanList is a list of InstallPlan Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InstallPlan) List of installplans. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.15. com.coreos.operators.v1alpha1.SubscriptionList schema Description SubscriptionList is a list of Subscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Subscription) List of subscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.16. com.coreos.operators.v2.OperatorConditionList schema Description OperatorConditionList is a list of OperatorCondition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorCondition) List of operatorconditions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.17. com.github.openshift.api.apps.v1.DeploymentConfigList schema Description DeploymentConfigList is a collection of deployment configs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DeploymentConfig) Items is a list of deployment configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.18. com.github.openshift.api.authorization.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.19. com.github.openshift.api.authorization.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.20. com.github.openshift.api.authorization.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.21. com.github.openshift.api.authorization.v1.RoleList schema Description RoleList is a collection of Roles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.22. com.github.openshift.api.build.v1.BuildConfigList schema Description BuildConfigList is a collection of BuildConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BuildConfig) items is a list of build configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.23. com.github.openshift.api.build.v1.BuildList schema Description BuildList is a collection of Builds. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) items is a list of builds kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.24. com.github.openshift.api.image.v1.ImageList schema Description ImageList is a list of Image objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) Items is a list of images kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.25. com.github.openshift.api.image.v1.ImageStreamList schema Description ImageStreamList is a list of ImageStream objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStream) Items is a list of imageStreams kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.26. com.github.openshift.api.image.v1.ImageStreamTagList schema Description ImageStreamTagList is a list of ImageStreamTag objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStreamTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.27. com.github.openshift.api.image.v1.ImageTagList schema Description ImageTagList is a list of ImageTag objects. When listing image tags, the image field is not populated. Tags are returned in alphabetical order by image stream and then tag. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.28. com.github.openshift.api.oauth.v1.OAuthAccessTokenList schema Description OAuthAccessTokenList is a collection of OAuth access tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAccessToken) Items is the list of OAuth access tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.29. com.github.openshift.api.oauth.v1.OAuthAuthorizeTokenList schema Description OAuthAuthorizeTokenList is a collection of OAuth authorization tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAuthorizeToken) Items is the list of OAuth authorization tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.30. com.github.openshift.api.oauth.v1.OAuthClientAuthorizationList schema Description OAuthClientAuthorizationList is a collection of OAuth client authorizations Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClientAuthorization) Items is the list of OAuth client authorizations kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.31. com.github.openshift.api.oauth.v1.OAuthClientList schema Description OAuthClientList is a collection of OAuth clients Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClient) Items is the list of OAuth clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.32. com.github.openshift.api.oauth.v1.UserOAuthAccessTokenList schema Description UserOAuthAccessTokenList is a collection of access tokens issued on behalf of the requesting user Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (UserOAuthAccessToken) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.33. com.github.openshift.api.project.v1.ProjectList schema Description ProjectList is a list of Project objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) Items is the list of projects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.34. com.github.openshift.api.quota.v1.AppliedClusterResourceQuotaList schema Description AppliedClusterResourceQuotaList is a collection of AppliedClusterResourceQuotas Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AppliedClusterResourceQuota) Items is a list of AppliedClusterResourceQuota kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.35. com.github.openshift.api.route.v1.RouteList schema Description RouteList is a collection of Routes. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Route) items is a list of routes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.36. com.github.openshift.api.security.v1.RangeAllocationList schema Description RangeAllocationList is a list of RangeAllocations objects Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RangeAllocation) List of RangeAllocations. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.37. com.github.openshift.api.template.v1.BrokerTemplateInstanceList schema Description BrokerTemplateInstanceList is a list of BrokerTemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BrokerTemplateInstance) items is a list of BrokerTemplateInstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.38. com.github.openshift.api.template.v1.TemplateInstanceList schema Description TemplateInstanceList is a list of TemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (TemplateInstance) items is a list of Templateinstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.39. com.github.openshift.api.template.v1.TemplateList schema Description TemplateList is a list of Template objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Template) Items is a list of templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.40. com.github.openshift.api.user.v1.GroupList schema Description GroupList is a collection of Groups Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Group) Items is the list of groups kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.41. com.github.openshift.api.user.v1.IdentityList schema Description IdentityList is a collection of Identities Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Identity) Items is the list of identities kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.42. com.github.openshift.api.user.v1.UserList schema Description UserList is a collection of Users. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (User) Items is the list of users kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.43. com.github.operator-framework.api.pkg.lib.version.OperatorVersion schema Description OperatorVersion is a wrapper around semver.Version which supports correct marshaling to YAML and JSON. Type string 1.44. com.github.operator-framework.api.pkg.operators.v1alpha1.APIServiceDefinitions schema Description APIServiceDefinitions declares all of the extension APIs managed or required by an operator being run by a ClusterServiceVersion. Type object Schema Property Type Description owned array (APIServiceDescription) required array (APIServiceDescription) 1.45. com.github.operator-framework.api.pkg.operators.v1alpha1.CustomResourceDefinitions schema Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being run by a ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Schema Property Type Description owned array (CRDDescription) required array (CRDDescription) 1.46. com.github.operator-framework.api.pkg.operators.v1alpha1.InstallMode schema Description InstallMode associates an InstallModeType with a flag representing whether the CSV supports it. Type object Required type supported Schema Property Type Description supported boolean type string 1.47. com.github.operator-framework.operator-lifecycle-manager.pkg.package-server.apis.operators.v1.PackageManifestList schema Description PackageManifestList is a list of PackageManifest objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PackageManifest) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.48. io.cncf.cni.k8s.v1.NetworkAttachmentDefinitionList schema Description NetworkAttachmentDefinitionList is a list of NetworkAttachmentDefinition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkAttachmentDefinition) List of network-attachment-definitions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.49. io.cncf.cni.k8s.v1beta1.MultiNetworkPolicyList schema Description MultiNetworkPolicyList is a list of MultiNetworkPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MultiNetworkPolicy) List of multi-networkpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.50. io.cncf.cni.whereabouts.v1alpha1.IPPoolList schema Description IPPoolList is a list of IPPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPPool) List of ippools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
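The CRD-backed lists in 1.48 through 1.51 (NetworkAttachmentDefinitionList, MultiNetworkPolicyList, IPPoolList, OverlappingRangeIPReservationList) have no typed accessor in most client libraries, so they are usually read through the generic custom-objects API. The sketch below is illustrative only; the group/version/plural values follow the k8s.cni.cncf.io/v1 CRD and the demo namespace is an example value.

```python
# Illustrative sketch: CRD-backed lists such as NetworkAttachmentDefinitionList
# are read through the dynamic custom-objects API rather than a typed client.
# "demo" is an example namespace, not part of this reference.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

nads = api.list_namespaced_custom_object(
    group="k8s.cni.cncf.io",
    version="v1",
    namespace="demo",
    plural="network-attachment-definitions",
)
print(nads["kind"])  # typically "NetworkAttachmentDefinitionList"
for nad in nads["items"]:
    print(nad["metadata"]["name"], nad.get("spec", {}).get("config", ""))
```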
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.51. io.cncf.cni.whereabouts.v1alpha1.OverlappingRangeIPReservationList schema Description OverlappingRangeIPReservationList is a list of OverlappingRangeIPReservation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OverlappingRangeIPReservation) List of overlappingrangeipreservations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.52. io.k8s.api.admissionregistration.v1.MutatingWebhookConfigurationList schema Description MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MutatingWebhookConfiguration) List of MutatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.53. io.k8s.api.admissionregistration.v1.ValidatingAdmissionPolicyBindingList schema Description ValidatingAdmissionPolicyBindingList is a list of ValidatingAdmissionPolicyBinding. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingAdmissionPolicyBinding) List of PolicyBinding. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.54. io.k8s.api.admissionregistration.v1.ValidatingAdmissionPolicyList schema Description ValidatingAdmissionPolicyList is a list of ValidatingAdmissionPolicy. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingAdmissionPolicy) List of ValidatingAdmissionPolicy. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.55. io.k8s.api.admissionregistration.v1.ValidatingWebhookConfigurationList schema Description ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingWebhookConfiguration) List of ValidatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.56. io.k8s.api.apps.v1.ControllerRevisionList schema Description ControllerRevisionList is a resource containing a list of ControllerRevision objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerRevision) Items is the list of ControllerRevisions kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.57. io.k8s.api.apps.v1.DaemonSetList schema Description DaemonSetList is a collection of daemon sets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DaemonSet) A list of daemon sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.58. io.k8s.api.apps.v1.DeploymentList schema Description DeploymentList is a list of Deployments. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Deployment) Items is the list of Deployments. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.59. io.k8s.api.apps.v1.ReplicaSetList schema Description ReplicaSetList is a collection of ReplicaSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicaSet) List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.60. io.k8s.api.apps.v1.StatefulSetList schema Description StatefulSetList is a collection of StatefulSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StatefulSet) Items is the list of stateful sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list's metadata. 
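The workload list types in 1.56 through 1.60 (ControllerRevisionList, DaemonSetList, DeploymentList, ReplicaSetList, StatefulSetList) are served by the apps/v1 group and have typed accessors in most client libraries. A minimal, illustrative sketch with the Kubernetes Python client follows; the demo namespace is an example value, and each call returns a typed *List object carrying the apiVersion/kind/items/metadata envelope documented above.

```python
# Illustrative sketch: listing apps/v1 workload resources with typed calls.
# "demo" is an example namespace, not part of this reference.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployments = apps.list_namespaced_deployment(namespace="demo")
statefulsets = apps.list_namespaced_stateful_set(namespace="demo")

# kind and apiVersion are returned by the server on list responses,
# e.g. "DeploymentList" / "apps/v1".
print(deployments.kind, deployments.api_version)
for d in deployments.items:
    print(d.metadata.name, d.status.ready_replicas)
for s in statefulsets.items:
    print(s.metadata.name, s.spec.replicas)
```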
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.61. io.k8s.api.autoscaling.v2.HorizontalPodAutoscalerList schema Description HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HorizontalPodAutoscaler) items is the list of horizontal pod autoscaler objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. 1.62. io.k8s.api.batch.v1.CronJobList schema Description CronJobList is a collection of cron jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CronJob) items is the list of CronJobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.63. io.k8s.api.batch.v1.JobList schema Description JobList is a collection of jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Job) items is the list of Jobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.64. io.k8s.api.certificates.v1.CertificateSigningRequestList schema Description CertificateSigningRequestList is a collection of CertificateSigningRequest objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CertificateSigningRequest) items is a collection of CertificateSigningRequest objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.65. io.k8s.api.coordination.v1.LeaseList schema Description LeaseList is a list of Lease objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Lease) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.66. io.k8s.api.core.v1.ComponentStatusList schema Description Status of all the conditions for the component as a list of ComponentStatus objects. Deprecated: This API is deprecated in v1.19+ Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ComponentStatus) List of ComponentStatus objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.67. io.k8s.api.core.v1.ConfigMapList schema Description ConfigMapList is a resource containing a list of ConfigMap objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConfigMap) Items is the list of ConfigMaps. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.68. io.k8s.api.core.v1.ConfigMapVolumeSource_v2 schema Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined 1.69. io.k8s.api.core.v1.CSIVolumeSource schema Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Schema Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 1.70. io.k8s.api.core.v1.EndpointsList schema Description EndpointsList is a list of endpoints. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Endpoints) List of endpoints. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.71. io.k8s.api.core.v1.EnvVar schema Description EnvVar represents an environment variable present in a Container. Type object Required name Schema Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty. 1.72. io.k8s.api.core.v1.EventList schema Description EventList is a list of events. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) List of events. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.73. io.k8s.api.core.v1.EventSource schema Description EventSource contains information for an event. Type object Schema Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated. 1.74. io.k8s.api.core.v1.LimitRangeList schema Description LimitRangeList is a list of LimitRange items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (LimitRange) Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase.
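The $(VAR_NAME) expansion and $$ escaping rules for EnvVar (1.71) and the key projection options of ConfigMapVolumeSource (1.68) are easiest to see in a concrete container spec. The sketch below is illustrative only; the pod, ConfigMap, key, and image names are invented for the example.

```python
# Illustrative sketch of the EnvVar (1.71) and ConfigMapVolumeSource (1.68)
# fields. All names (demo-pod, app-config, config.yaml) are example values.
from kubernetes import client

container = client.V1Container(
    name="app",
    image="registry.example.com/app:latest",
    env=[
        client.V1EnvVar(name="LOG_DIR", value="/var/log/app"),
        # $(LOG_DIR) expands to the variable defined above ...
        client.V1EnvVar(name="LOG_FILE", value="$(LOG_DIR)/app.log"),
        # ... while $$(HOME) escapes expansion and yields the literal "$(HOME)".
        client.V1EnvVar(name="TEMPLATE", value="$$(HOME)/template"),
    ],
    volume_mounts=[client.V1VolumeMount(name="config", mount_path="/etc/app")],
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod"),
    spec=client.V1PodSpec(
        containers=[container],
        volumes=[
            client.V1Volume(
                name="config",
                config_map=client.V1ConfigMapVolumeSource(
                    name="app-config",
                    # Project a single key under a chosen file name instead of
                    # every key in the ConfigMap's Data field.
                    items=[client.V1KeyToPath(key="app.yaml", path="config.yaml")],
                    default_mode=0o440,  # octal mode bits; JSON uses decimal
                    optional=True,
                ),
            )
        ],
    ),
)
# client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```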
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.75. io.k8s.api.core.v1.LocalObjectReference_v2 schema Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names 1.76. io.k8s.api.core.v1.NamespaceCondition schema Description NamespaceCondition contains details about state of namespace. Type object Required type status Schema Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition. 1.77. io.k8s.api.core.v1.NamespaceList schema Description NamespaceList is a list of Namespaces. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Namespace) Items is the list of Namespace objects in the list. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.78. io.k8s.api.core.v1.NodeList schema Description NodeList is the whole list of all Nodes which have been registered with master. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.79. io.k8s.api.core.v1.ObjectReference schema Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Schema Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 1.80. io.k8s.api.core.v1.PersistentVolumeClaim schema Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. ..spec Description:: + PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. 
For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object VolumeResourceRequirements describes the storage resource requirements for a volume. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeAttributesClassName string volumeAttributesClassName may be used to set the VolumeAttributesClass used by this claim. If specified, the CSI driver will create or update the volume with the attributes defined in the corresponding VolumeAttributesClass. This has a different purpose than storageClassName, it can be changed after the claim is created. An empty string value means that no VolumeAttributesClass will be applied to the claim but it's not allowed to reset this field to empty string once it is set. If unspecified and the PersistentVolumeClaim is unbound, the default VolumeAttributesClass will be set by the persistentvolume controller if it exists. If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. ..spec.dataSource Description:: + TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced ..spec.dataSourceRef Description:: + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. 
This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. ..spec.resources Description:: + VolumeResourceRequirements describes the storage resource requirements for a volume. Type object Property Type Description limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ..status Description:: + PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. 
* Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources object (Quantity) allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. 
conditions[] object PersistentVolumeClaimCondition contains details about state of pvc currentVolumeAttributesClassName string currentVolumeAttributesClassName is the current name of the VolumeAttributesClass the PVC is using. When unset, there is no VolumeAttributeClass applied to this PersistentVolumeClaim This is an alpha field and requires enabling VolumeAttributesClass feature. modifyVolumeStatus object ModifyVolumeStatus represents the status object of ControllerModifyVolume operation phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound ..status.conditions Description:: + conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'Resizing'. Type array ..status.conditions[] Description:: + PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "Resizing" that means the underlying persistent volume is being resized. status string type string ..status.modifyVolumeStatus Description:: + ModifyVolumeStatus represents the status object of ControllerModifyVolume operation Type object Required status Property Type Description status string status is the status of the ControllerModifyVolume operation. It can be in any of following states: - Pending Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing. - InProgress InProgress indicates that the volume is being modified. - Infeasible Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified. Note: New statuses can be added in the future. Consumers should check for unknown statuses and fail appropriately. Possible enum values: - "InProgress" InProgress indicates that the volume is being modified - "Infeasible" Infeasible indicates that the request has been rejected as invalid by the CSI driver. To resolve the error, a valid VolumeAttributesClass needs to be specified - "Pending" Pending indicates that the PersistentVolumeClaim cannot be modified due to unmet requirements, such as the specified VolumeAttributesClass not existing targetVolumeAttributesClassName string targetVolumeAttributesClassName is the name of the VolumeAttributesClass the PVC currently being reconciled 1.81. io.k8s.api.core.v1.PersistentVolumeClaimList schema Description PersistentVolumeClaimList is a list of PersistentVolumeClaim items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolumeClaim) items is a list of persistent volume claims. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.82. io.k8s.api.core.v1.PersistentVolumeList schema Description PersistentVolumeList is a list of PersistentVolume items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolume) items is a list of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.83. io.k8s.api.core.v1.PersistentVolumeSpec schema Description PersistentVolumeSpec is the specification of a persistent volume. Type object Schema Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore AWSElasticBlockStoreVolumeSource awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFilePersistentVolumeSource azureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs CephFSPersistentVolumeSource cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderPersistentVolumeSource cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md claimRef ObjectReference claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding csi CSIPersistentVolumeSource csi represents storage that is handled by an external CSI driver (Beta feature). fc FCVolumeSource fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexPersistentVolumeSource flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk glusterfs GlusterfsPersistentVolumeSource glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIPersistentVolumeSource iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. local LocalVolumeSource local represents directly-attached storage with node affinity mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs NFSVolumeSource nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs nodeAffinity VolumeNodeAffinity nodeAffinity defines constraints that limit what nodes this volume can be accessed from. This field influences the scheduling of pods that use this volume. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. 
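The PersistentVolumeClaimSpec fields documented in 1.80, and the storageClassName linkage to PersistentVolumeSpec described above, come together in an ordinary dynamically provisioned claim. The sketch below is illustrative only; the claim name, namespace, and storage class are example values, and older releases of the Python client model the resources field as V1ResourceRequirements rather than V1VolumeResourceRequirements.

```python
# Illustrative sketch of a PersistentVolumeClaim using the spec fields in 1.80.
# "data-claim", "demo", and "standard" are example values.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        # Newer client releases name this model V1VolumeResourceRequirements;
        # both forms carry the same requests/limits maps.
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
        storage_class_name="standard",   # selects the provisioning StorageClass
        volume_mode="Filesystem",        # implied default when omitted
    ),
)

created = core.create_namespaced_persistent_volume_claim(
    namespace="demo", body=pvc
)
print(created.status.phase)  # typically "Pending" until a volume is bound
```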
photonPersistentDisk PhotonPersistentDiskVolumeSource photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource portworxVolume represents a portworx volume attached and mounted on kubelets host machine quobyte QuobyteVolumeSource quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDPersistentVolumeSource rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOPersistentVolumeSource scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos StorageOSPersistentVolumeSource storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod More info: https://examples.k8s.io/volumes/storageos/README.md volumeAttributesClassName string Name of VolumeAttributesClass to which this persistent volume belongs. Empty value is not allowed. When this field is not set, it indicates that this volume does not belong to any VolumeAttributesClass. This field is mutable and can be changed by the CSI driver after a volume has been updated successfully to a new class. For an unbound PersistentVolume, the volumeAttributesClassName will be matched with unbound PersistentVolumeClaims during the binding process. This is an alpha field and requires enabling VolumeAttributesClass feature. volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume VsphereVirtualDiskVolumeSource vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 1.84. io.k8s.api.core.v1.PodList schema Description PodList is a list of Pods. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Pod) List of pods. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.85. io.k8s.api.core.v1.PodTemplateList schema Description PodTemplateList is a list of PodTemplates. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodTemplate) List of pod templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.86. io.k8s.api.core.v1.PodTemplateSpec schema Description PodTemplateSpec describes the data a pod should have when created from a template Type object Schema Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PodSpec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.87. io.k8s.api.core.v1.ReplicationControllerList schema Description ReplicationControllerList is a collection of replication controllers. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicationController) List of replication controllers. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.88. io.k8s.api.core.v1.ResourceQuotaList schema Description ResourceQuotaList is a list of ResourceQuota items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ResourceQuota) Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.89. 
io.k8s.api.core.v1.ResourceQuotaSpec_v2 schema Description ResourceQuotaSpec defines the desired hard limits to enforce for Quota. Type object Schema Property Type Description hard object (Quantity) hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector ScopeSelector_v2 scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 1.90. io.k8s.api.core.v1.ResourceQuotaStatus schema Description ResourceQuotaStatus defines the enforced hard limits and observed use. Type object Schema Property Type Description hard object (Quantity) Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used object (Quantity) Used is the current observed total usage of the resource in the namespace. 1.91. io.k8s.api.core.v1.ResourceRequirements schema Description ResourceRequirements describes the compute resource requirements. Type object Schema Property Type Description claims array (ResourceClaim) Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 1.92. io.k8s.api.core.v1.Secret schema Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data object (string) Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4 immutable boolean Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata stringData object (string) stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API. type string Used to facilitate programmatic handling of secret data. More info: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types 1.93. io.k8s.api.core.v1.SecretList schema Description SecretList is a list of Secret. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.94. io.k8s.api.core.v1.SecretVolumeSource_v2 schema Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 1.95. 
io.k8s.api.core.v1.ServiceAccountList schema Description ServiceAccountList is a list of ServiceAccount objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceAccount) List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.96. io.k8s.api.core.v1.ServiceList schema Description ServiceList holds a list of services. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Service) List of services kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.97. io.k8s.api.core.v1.Toleration schema Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Schema Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. 
Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 1.98. io.k8s.api.core.v1.TopologySelectorTerm schema Description A topology selector term represents the result of label queries. A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future. Type object Schema Property Type Description matchLabelExpressions array (TopologySelectorLabelRequirement) A list of topology selector requirements by labels. 1.99. io.k8s.api.core.v1.TypedLocalObjectReference schema Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Schema Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 1.100. io.k8s.api.discovery.v1.EndpointSliceList schema Description EndpointSliceList represents a list of endpoint slices Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EndpointSlice) items is the list of endpoint slices kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.101. io.k8s.api.events.v1.EventList schema Description EventList is a list of Event objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.102. 
io.k8s.api.flowcontrol.v1.FlowSchemaList schema Description FlowSchemaList is a list of FlowSchema objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FlowSchema) items is a list of FlowSchemas. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.103. io.k8s.api.flowcontrol.v1.PriorityLevelConfigurationList schema Description PriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityLevelConfiguration) items is a list of request-priorities. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.104. io.k8s.api.networking.v1.IngressClassList schema Description IngressClassList is a collection of IngressClasses. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressClass) items is the list of IngressClasses. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.105. io.k8s.api.networking.v1.IngressList schema Description IngressList is a collection of Ingress. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) items is the list of Ingress. 
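All of the *List schemas in this section share the same envelope: apiVersion, kind, metadata (ListMeta), and items. The sketch below is a hedged Go illustration, using only the standard library and a hand-written struct that mirrors the documented field names (not the client-go machinery types), of how a client might read the ListMeta continue token to page through a chunked list response; the token value and item counts are made up.

package main

import (
	"encoding/json"
	"fmt"
)

// Hand-written mirror of the list envelope documented in this section
// (apiVersion, kind, metadata, items); not the client-go machinery types.
type ListMeta struct {
	Continue           string `json:"continue,omitempty"`
	RemainingItemCount *int64 `json:"remainingItemCount,omitempty"`
	ResourceVersion    string `json:"resourceVersion,omitempty"`
}

type GenericList struct {
	APIVersion string            `json:"apiVersion"`
	Kind       string            `json:"kind"`
	Metadata   ListMeta          `json:"metadata"`
	Items      []json.RawMessage `json:"items"`
}

func main() {
	// Hypothetical chunked response; the continue token and counts are made up.
	raw := `{
	  "apiVersion": "networking.k8s.io/v1",
	  "kind": "IngressList",
	  "metadata": {"resourceVersion": "12345", "continue": "opaque-token", "remainingItemCount": 40},
	  "items": [{"metadata": {"name": "web"}}]
	}`

	var list GenericList
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		panic(err)
	}
	fmt.Printf("got %d item(s) at resourceVersion %s\n", len(list.Items), list.Metadata.ResourceVersion)
	if list.Metadata.Continue != "" {
		// Send this opaque token back as the ?continue= query parameter on the
		// next list request to retrieve the remaining items.
		fmt.Println("more items available; continue token:", list.Metadata.Continue)
	}
}

The remaining list schemas below follow the same pattern and differ only in the element type of items.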
kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.106. io.k8s.api.networking.v1.NetworkPolicyList schema Description NetworkPolicyList is a list of NetworkPolicy objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkPolicy) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.107. io.k8s.api.node.v1.RuntimeClassList schema Description RuntimeClassList is a list of RuntimeClass objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RuntimeClass) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.108. io.k8s.api.policy.v1.PodDisruptionBudgetList schema Description PodDisruptionBudgetList is a collection of PodDisruptionBudgets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodDisruptionBudget) Items is a list of PodDisruptionBudgets kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.109. 
io.k8s.api.rbac.v1.AggregationRule_v2 schema Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Schema Property Type Description clusterRoleSelectors array (LabelSelector_v3) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 1.110. io.k8s.api.rbac.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.111. io.k8s.api.rbac.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.112. io.k8s.api.rbac.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.113. io.k8s.api.rbac.v1.RoleList schema Description RoleList is a collection of Roles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.114. io.k8s.api.scheduling.v1.PriorityClassList schema Description PriorityClassList is a collection of priority classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityClass) items is the list of PriorityClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.115. io.k8s.api.storage.v1.CSIDriverList schema Description CSIDriverList is a collection of CSIDriver objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIDriver) items is the list of CSIDriver kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.116. io.k8s.api.storage.v1.CSINodeList schema Description CSINodeList is a collection of CSINode objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSINode) items is the list of CSINode kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.117. 
io.k8s.api.storage.v1.CSIStorageCapacityList schema Description CSIStorageCapacityList is a collection of CSIStorageCapacity objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIStorageCapacity) items is the list of CSIStorageCapacity objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.118. io.k8s.api.storage.v1.StorageClassList schema Description StorageClassList is a collection of storage classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageClass) items is the list of StorageClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.119. io.k8s.api.storage.v1.VolumeAttachmentList schema Description VolumeAttachmentList is a collection of VolumeAttachment objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeAttachment) items is the list of VolumeAttachments kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.120. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionList schema Description CustomResourceDefinitionList is a list of CustomResourceDefinition objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CustomResourceDefinition) items list individual CustomResourceDefinition objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.121. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaProps schema Description JSONSchemaProps is a JSON-Schema following Specification Draft 4 ( http://json-schema.org/ ). Type object Schema Property Type Description $ref string $schema string additionalItems `` additionalProperties `` allOf array (undefined) anyOf array (undefined) default JSON default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false. definitions object (undefined) dependencies object (undefined) description string enum array (JSON) example JSON exclusiveMaximum boolean exclusiveMinimum boolean externalDocs ExternalDocumentation format string format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: - bsonobjectid: a bson object ID, i.e. a 24 characters hex string - uri: an URI as parsed by Golang net/url.ParseRequestURI - email: an email address as parsed by Golang net/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034]. - ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" - isbn10: an ISBN10 number string like "0321751043" - isbn13: an ISBN13 number string like "978-0321751041" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non digit characters mixed in - ssn: a U.S.
social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$ - hexcolor: a hexadecimal color code like "#FFFFFF" following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ - rgbcolor: an RGB color code like "rgb(255,255,255)" - byte: base64 encoded binary data - password: any kind of string - date: a date string like "2006-01-02" as defined by full-date in RFC3339 - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. id string items `` maxItems integer maxLength integer maxProperties integer maximum number minItems integer minLength integer minProperties integer minimum number multipleOf number not `` nullable boolean oneOf array (undefined) pattern string patternProperties object (undefined) properties object (undefined) required array (string) title string type string uniqueItems boolean x-kubernetes-embedded-resource boolean x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata). x-kubernetes-int-or-string boolean x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns: 1) anyOf: - type: integer - type: string 2) allOf: - anyOf: - type: integer - type: string - ... zero or more x-kubernetes-list-map-keys array (string) x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type map by specifying the keys used as the index of the map. This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported). The properties specified must either be required or have a default value, to ensure those properties are present for all list items. x-kubernetes-list-type string x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values: 1) atomic : the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ... ). 2) set : Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type atomic or an array with x-kubernetes-list-type atomic . 3) map : These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The map tag must only be used on a list with elements of type object. Defaults to atomic for arrays. x-kubernetes-map-type string x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values: 1) granular : These maps are actual maps (key-value pairs) and each field is independent from the others (they can each be manipulated by separate actors). This is the default behaviour for all maps.
2) atomic : the list is treated as a single entity, like a scalar. Atomic maps will be entirely replaced when updated. x-kubernetes-preserve-unknown-fields boolean x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden. x-kubernetes-validations array (ValidationRule) x-kubernetes-validations describes a list of validation rules written in the CEL expression language. This field is an alpha-level. Using this field requires the feature gate CustomResourceValidationExpressions to be enabled. 1.122. io.k8s.apimachinery.pkg.api.resource.Quantity schema Description Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <digit> ::= 0 \| 1 \| ... \| 9 <digits> ::= <digit> \| <digit><digits> <number> ::= <digits> \| <digits>.<digits> \| <digits>. \| .<digits> <sign> ::= "+" \| "-" <signedNumber> ::= <number> \| <sign><number> <suffix> ::= <binarySI> \| <decimalExponent> \| <decimalSI> <binarySI> ::= Ki \| Mi \| Gi \| Ti \| Pi \| Ei <decimalSI> ::= m \| "" \| k \| M \| G \| T \| P \| E <decimalExponent> ::= "e" <signedNumber> \| "E" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as "1500m" - 1.5Gi will be serialized as "1536Mi" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Type string 1.123. io.k8s.apimachinery.pkg.apis.meta.v1.Condition schema Description Condition contains details for one aspect of the current state of this API Resource. Type object Required type status lastTransitionTime reason message Schema Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. 
This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 1.124. io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions schema Description DeleteOptions may be provided when deleting an API object. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dryRun array (string) When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. preconditions Preconditions Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. 1.125. io.k8s.apimachinery.pkg.apis.meta.v1.Duration schema Description Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json. Type string 1.126. io.k8s.apimachinery.pkg.apis.meta.v1.GroupVersionKind schema Description GroupVersionKind unambiguously identifies a kind. 
It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group version kind Schema Property Type Description group string kind string version string 1.127. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector schema Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Schema Property Type Description matchExpressions array (LabelSelectorRequirement) matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 1.128. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector_v4 schema Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Schema Property Type Description matchExpressions array (LabelSelectorRequirement_v2) matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 1.129. io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta schema Description ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}. Type object Schema Property Type Description continue string continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. remainingItemCount integer remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is estimating the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact. resourceVersion string String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. 
Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. 1.130. io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime schema Description MicroTime is version of Time with microsecond level precision. Type string 1.131. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta schema Description ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. Type object Schema Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers array (string) Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. 
finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels managedFields array (ManagedFieldsEntry) ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces ownerReferences array (OwnerReference) List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. 
Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 1.132. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta_v2 schema Description ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. Type object Schema Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers array (string) Must be empty before the object is deleted from the registry.
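To make the ObjectMeta fields in 1.131 more tangible, here is a minimal Go sketch of a metadata block that relies on generateName, labels, a finalizer, and a controller owner reference; the names, the finalizer key, and the UID are hypothetical.

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// exampleObjectMeta assembles commonly used ObjectMeta fields.
func exampleObjectMeta() metav1.ObjectMeta {
	controller := true
	return metav1.ObjectMeta{
		GenerateName: "worker-", // server appends a unique suffix because Name is unset
		Namespace:    "demo",
		Labels:       map[string]string{"app": "worker"},
		Annotations:  map[string]string{"example.com/note": "arbitrary, non-queryable metadata"},
		Finalizers:   []string{"example.com/cleanup"}, // hypothetical finalizer; blocks deletion until removed
		OwnerReferences: []metav1.OwnerReference{{
			APIVersion: "apps/v1",
			Kind:       "Deployment",
			Name:       "worker-owner",
			UID:        types.UID("f0c0ffee-0000-0000-0000-000000000000"),
			Controller: &controller, // at most one owner may be the managing controller
		}},
	}
}
```

Because Name is unset and GenerateName is set, the server generates the final name on creation (and returns 409 if the generated name already exists); the finalizer entry keeps the object visible, with deletionTimestamp set, until a controller removes it.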
Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels managedFields array (ManagedFieldsEntry) ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces ownerReferences array (OwnerReference) List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. 
There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 1.133. io.k8s.apimachinery.pkg.apis.meta.v1.Status schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.134. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v10 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details.
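Status (1.133) is what the API server returns when a call fails. As an illustrative sketch, assuming a client-go clientset and a ConfigMaps GET purely as an example, the typed error wrapper exposes the Status fields described above:

```go
package example

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// inspectFailure fetches an object that may not exist and prints the
// machine-readable fields of the returned Status.
func inspectFailure(ctx context.Context, cs kubernetes.Interface) {
	_, err := cs.CoreV1().ConfigMaps("demo").Get(ctx, "missing", metav1.GetOptions{})
	if statusErr, ok := err.(*apierrors.StatusError); ok {
		s := statusErr.ErrStatus // a metav1.Status
		// status is "Failure", reason clarifies the suggested HTTP code,
		// message is the human-readable description.
		fmt.Println(s.Status, s.Reason, s.Code, s.Message)
	}
	// Helpers such as apierrors.IsNotFound(err) test the reason for you.
}
```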
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.135. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v11 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.136. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v2 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. 
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.137. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v3 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.138. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v4 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. 
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.139. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v5 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.140. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v6 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. 
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.141. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v7 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.142. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v8 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. 
This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.143. io.k8s.apimachinery.pkg.apis.meta.v1.Status_v9 schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails_v2 Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.144. io.k8s.apimachinery.pkg.apis.meta.v1.Time schema Description Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers. Type string 1.145. io.k8s.apimachinery.pkg.apis.meta.v1.WatchEvent schema Description Event represents a single event to a watched resource. Type object Required type object Schema Property Type Description object RawExtension Object is: * If Type is Added or Modified: the new state of the object. * If Type is Deleted: the state of the object immediately before deletion. * If Type is Error: *Status is recommended; other types may make sense depending on context. type string 1.146. 
io.k8s.apimachinery.pkg.runtime.RawExtension schema Description RawExtension is used to hold extensions in external versions. To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types. So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.) Type object 1.147. io.k8s.apimachinery.pkg.util.intstr.IntOrString schema Description IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number. Type string 1.148. io.k8s.kube-aggregator.pkg.apis.apiregistration.v1.APIServiceList schema Description APIServiceList is a list of APIService objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIService) Items is the list of APIService. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.149. io.k8s.metrics.pkg.apis.metrics.v1beta1.NodeMetricsList schema Description NodeMetricsList is a list of NodeMetrics. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NodeMetrics) List of node metrics. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.150. io.k8s.metrics.pkg.apis.metrics.v1beta1.PodMetricsList schema Description PodMetricsList is a list of PodMetrics. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object.
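The two-step decode described for RawExtension (1.146) also applies to WatchEvent (1.145), whose object field is a RawExtension. A minimal Go sketch, assuming the watched resource is a ConfigMap and the event JSON comes from a watch stream:

```go
package example

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// decodeWatchEvent first unmarshals the event envelope, which leaves the
// object stored as raw JSON in the RawExtension, then unpacks those bytes
// into the expected typed object.
func decodeWatchEvent(data []byte) error {
	var ev metav1.WatchEvent
	if err := json.Unmarshal(data, &ev); err != nil {
		return err
	}
	var cm corev1.ConfigMap // assumption: the watched resource is a ConfigMap
	if err := json.Unmarshal(ev.Object.Raw, &cm); err != nil {
		return err
	}
	fmt.Printf("%s %s/%s\n", ev.Type, cm.Namespace, cm.Name) // e.g. ADDED demo/settings
	return nil
}
```

In controller code this unpacking is normally done by the scheme's conversion machinery rather than by hand, as the RawExtension description notes.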
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodMetrics) List of pod metrics. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.151. io.k8s.migration.v1alpha1.StorageStateList schema Description StorageStateList is a list of StorageState Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageState) List of storagestates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.152. io.k8s.migration.v1alpha1.StorageVersionMigrationList schema Description StorageVersionMigrationList is a list of StorageVersionMigration Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageVersionMigration) List of storageversionmigrations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.153. io.k8s.networking.policy.v1alpha1.AdminNetworkPolicyList schema Description AdminNetworkPolicyList is a list of AdminNetworkPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AdminNetworkPolicy) List of adminnetworkpolicies. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.154. io.k8s.networking.policy.v1alpha1.BaselineAdminNetworkPolicyList schema Description BaselineAdminNetworkPolicyList is a list of BaselineAdminNetworkPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BaselineAdminNetworkPolicy) List of baselineadminnetworkpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.155. io.k8s.storage.snapshot.v1.VolumeSnapshotClassList schema Description VolumeSnapshotClassList is a list of VolumeSnapshotClass Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotClass) List of volumesnapshotclasses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.156. io.k8s.storage.snapshot.v1.VolumeSnapshotContentList schema Description VolumeSnapshotContentList is a list of VolumeSnapshotContent Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotContent) List of volumesnapshotcontents. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.157. io.k8s.storage.snapshot.v1.VolumeSnapshotList schema Description VolumeSnapshotList is a list of VolumeSnapshot Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshot) List of volumesnapshots. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.158. io.metal3.v1alpha1.BareMetalHostList schema Description BareMetalHostList is a list of BareMetalHost Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BareMetalHost) List of baremetalhosts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.159. io.metal3.v1alpha1.BMCEventSubscriptionList schema Description BMCEventSubscriptionList is a list of BMCEventSubscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BMCEventSubscription) List of bmceventsubscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.160. io.metal3.v1alpha1.DataImageList schema Description DataImageList is a list of DataImage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DataImage) List of dataimages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.161. io.metal3.v1alpha1.FirmwareSchemaList schema Description FirmwareSchemaList is a list of FirmwareSchema Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FirmwareSchema) List of firmwareschemas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.162. io.metal3.v1alpha1.HardwareDataList schema Description HardwareDataList is a list of HardwareData Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HardwareData) List of hardwaredata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.163. 
io.metal3.v1alpha1.HostFirmwareComponentsList schema Description HostFirmwareComponentsList is a list of HostFirmwareComponents Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HostFirmwareComponents) List of hostfirmwarecomponents. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.164. io.metal3.v1alpha1.HostFirmwareSettingsList schema Description HostFirmwareSettingsList is a list of HostFirmwareSettings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HostFirmwareSettings) List of hostfirmwaresettings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.165. io.metal3.v1alpha1.PreprovisioningImageList schema Description PreprovisioningImageList is a list of PreprovisioningImage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PreprovisioningImage) List of preprovisioningimages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.166. 
io.metal3.v1alpha1.ProvisioningList schema Description ProvisioningList is a list of Provisioning Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Provisioning) List of provisionings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.167. io.openshift.apiserver.v1.APIRequestCountList schema Description APIRequestCountList is a list of APIRequestCount Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIRequestCount) List of apirequestcounts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.168. io.openshift.authorization.v1.RoleBindingRestrictionList schema Description RoleBindingRestrictionList is a list of RoleBindingRestriction Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBindingRestriction) List of rolebindingrestrictions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.169. 
io.openshift.autoscaling.v1.ClusterAutoscalerList schema Description ClusterAutoscalerList is a list of ClusterAutoscaler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterAutoscaler) List of clusterautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.170. io.openshift.autoscaling.v1beta1.MachineAutoscalerList schema Description MachineAutoscalerList is a list of MachineAutoscaler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineAutoscaler) List of machineautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.171. io.openshift.cloudcredential.v1.CredentialsRequestList schema Description CredentialsRequestList is a list of CredentialsRequest Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CredentialsRequest) List of credentialsrequests. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.172. 
io.openshift.config.v1.APIServerList schema Description APIServerList is a list of APIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIServer) List of apiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.173. io.openshift.config.v1.AuthenticationList schema Description AuthenticationList is a list of Authentication Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Authentication) List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.174. io.openshift.config.v1.BuildList schema Description BuildList is a list of Build Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) List of builds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.175. io.openshift.config.v1.ClusterOperatorList schema Description ClusterOperatorList is a list of ClusterOperator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterOperator) List of clusteroperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.176. io.openshift.config.v1.ClusterVersionList schema Description ClusterVersionList is a list of ClusterVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterVersion) List of clusterversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.177. io.openshift.config.v1.ConsoleList schema Description ConsoleList is a list of Console Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Console) List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.178. io.openshift.config.v1.DNSList schema Description DNSList is a list of DNS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNS) List of dnses. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.179. io.openshift.config.v1.FeatureGateList schema Description FeatureGateList is a list of FeatureGate Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FeatureGate) List of featuregates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.180. io.openshift.config.v1.ImageContentPolicyList schema Description ImageContentPolicyList is a list of ImageContentPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageContentPolicy) List of imagecontentpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.181. io.openshift.config.v1.ImageDigestMirrorSetList schema Description ImageDigestMirrorSetList is a list of ImageDigestMirrorSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageDigestMirrorSet) List of imagedigestmirrorsets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. 
Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.182. io.openshift.config.v1.ImageList schema Description ImageList is a list of Image Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) List of images. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.183. io.openshift.config.v1.ImageTagMirrorSetList schema Description ImageTagMirrorSetList is a list of ImageTagMirrorSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTagMirrorSet) List of imagetagmirrorsets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.184. io.openshift.config.v1.InfrastructureList schema Description InfrastructureList is a list of Infrastructure Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Infrastructure) List of infrastructures. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
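Because the metadata field on each of these list types is a standard ListMeta, server-side pagination works identically for all of them: pass a limit, then feed the returned continue token back in until it comes back empty. The following is a rough sketch only, assuming a placeholder kubeconfig path and using clusteroperators.config.openshift.io purely as an illustrative resource; the page size of 50 is arbitrary.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// listInChunks pages through a cluster-scoped resource using the
// limit/continue mechanism that every ListMeta-backed list supports.
func listInChunks(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource) error {
	opts := metav1.ListOptions{Limit: 50} // arbitrary page size
	for {
		page, err := client.Resource(gvr).List(ctx, opts)
		if err != nil {
			return err
		}
		for _, item := range page.Items {
			fmt.Println(item.GetName())
		}
		// metadata.continue is empty once the final page has been served.
		if page.GetContinue() == "" {
			return nil
		}
		opts.Continue = page.GetContinue()
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
	if err := listInChunks(context.TODO(), client, gvr); err != nil {
		panic(err)
	}
}
```
1.185.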
io.openshift.config.v1.IngressList schema Description IngressList is a list of Ingress Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) List of ingresses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.186. io.openshift.config.v1.NetworkList schema Description NetworkList is a list of Network Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Network) List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.187. io.openshift.config.v1.NodeList schema Description NodeList is a list of Node Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.188. io.openshift.config.v1.OAuthList schema Description OAuthList is a list of OAuth Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuth) List of oauths. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.189. io.openshift.config.v1.OperatorHubList schema Description OperatorHubList is a list of OperatorHub Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorHub) List of operatorhubs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.190. io.openshift.config.v1.ProjectList schema Description ProjectList is a list of Project Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) List of projects. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.191. io.openshift.config.v1.ProxyList schema Description ProxyList is a list of Proxy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Proxy) List of proxies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.192. io.openshift.config.v1.SchedulerList schema Description SchedulerList is a list of Scheduler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Scheduler) List of schedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.193. io.openshift.console.v1.ConsoleCLIDownloadList schema Description ConsoleCLIDownloadList is a list of ConsoleCLIDownload Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleCLIDownload) List of consoleclidownloads. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.194. io.openshift.console.v1.ConsoleExternalLogLinkList schema Description ConsoleExternalLogLinkList is a list of ConsoleExternalLogLink Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleExternalLogLink) List of consoleexternalloglinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.195. io.openshift.console.v1.ConsoleLinkList schema Description ConsoleLinkList is a list of ConsoleLink Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleLink) List of consolelinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.196. io.openshift.console.v1.ConsoleNotificationList schema Description ConsoleNotificationList is a list of ConsoleNotification Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleNotification) List of consolenotifications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.197. io.openshift.console.v1.ConsolePluginList schema Description ConsolePluginList is a list of ConsolePlugin Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsolePlugin) List of consoleplugins. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.198. 
io.openshift.console.v1.ConsoleQuickStartList schema Description ConsoleQuickStartList is a list of ConsoleQuickStart Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleQuickStart) List of consolequickstarts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.199. io.openshift.console.v1.ConsoleSampleList schema Description ConsoleSampleList is a list of ConsoleSample Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleSample) List of consolesamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.200. io.openshift.console.v1.ConsoleYAMLSampleList schema Description ConsoleYAMLSampleList is a list of ConsoleYAMLSample Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleYAMLSample) List of consoleyamlsamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.201. 
io.openshift.helm.v1beta1.HelmChartRepositoryList schema Description HelmChartRepositoryList is a list of HelmChartRepository Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HelmChartRepository) List of helmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.202. io.openshift.helm.v1beta1.ProjectHelmChartRepositoryList schema Description ProjectHelmChartRepositoryList is a list of ProjectHelmChartRepository Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ProjectHelmChartRepository) List of projecthelmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.203. io.openshift.machine.v1.ControlPlaneMachineSetList schema Description ControlPlaneMachineSetList is a list of ControlPlaneMachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControlPlaneMachineSet) List of controlplanemachinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
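Several of the list types that follow, such as MachineList and MachineSetList, describe namespaced resources, so the list request is scoped to a namespace and the returned items contain only that namespace's objects. A minimal sketch, assuming a placeholder kubeconfig path and the openshift-machine-api namespace used by a default installation (an assumption about the environment, not something the schema itself requires):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Machine is namespaced, so the list call is scoped to a namespace and
	// the response is a MachineList containing only that namespace's objects.
	gvr := schema.GroupVersionResource{
		Group:    "machine.openshift.io",
		Version:  "v1beta1",
		Resource: "machines",
	}
	list, err := client.Resource(gvr).
		Namespace("openshift-machine-api"). // assumed default-install namespace
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	fmt.Printf("%d machines\n", len(list.Items))
	for _, m := range list.Items {
		fmt.Println(m.GetName())
	}
}
```
1.204.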
io.openshift.machine.v1beta1.MachineHealthCheckList schema Description MachineHealthCheckList is a list of MachineHealthCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineHealthCheck) List of machinehealthchecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.205. io.openshift.machine.v1beta1.MachineList schema Description MachineList is a list of Machine Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Machine) List of machines. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.206. io.openshift.machine.v1beta1.MachineSetList schema Description MachineSetList is a list of MachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineSet) List of machinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.207. io.openshift.machineconfiguration.v1.ContainerRuntimeConfigList schema Description ContainerRuntimeConfigList is a list of ContainerRuntimeConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ContainerRuntimeConfig) List of containerruntimeconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.208. io.openshift.machineconfiguration.v1.ControllerConfigList schema Description ControllerConfigList is a list of ControllerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerConfig) List of controllerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.209. io.openshift.machineconfiguration.v1.KubeletConfigList schema Description KubeletConfigList is a list of KubeletConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeletConfig) List of kubeletconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.210. io.openshift.machineconfiguration.v1.MachineConfigList schema Description MachineConfigList is a list of MachineConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfig) List of machineconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.211. io.openshift.machineconfiguration.v1.MachineConfigPoolList schema Description MachineConfigPoolList is a list of MachineConfigPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfigPool) List of machineconfigpools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.212. io.openshift.monitoring.v1.AlertingRuleList schema Description AlertingRuleList is a list of AlertingRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertingRule) List of alertingrules. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.213. io.openshift.monitoring.v1.AlertRelabelConfigList schema Description AlertRelabelConfigList is a list of AlertRelabelConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertRelabelConfig) List of alertrelabelconfigs. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.214. io.openshift.network.cloud.v1.CloudPrivateIPConfigList schema Description CloudPrivateIPConfigList is a list of CloudPrivateIPConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudPrivateIPConfig) List of cloudprivateipconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.215. io.openshift.operator.controlplane.v1alpha1.PodNetworkConnectivityCheckList schema Description PodNetworkConnectivityCheckList is a list of PodNetworkConnectivityCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodNetworkConnectivityCheck) List of podnetworkconnectivitychecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.216. io.openshift.operator.imageregistry.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.217. io.openshift.operator.imageregistry.v1.ImagePrunerList schema Description ImagePrunerList is a list of ImagePruner Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImagePruner) List of imagepruners. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.218. io.openshift.operator.ingress.v1.DNSRecordList schema Description DNSRecordList is a list of DNSRecord Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNSRecord) List of dnsrecords. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.219. io.openshift.operator.network.v1.EgressRouterList schema Description EgressRouterList is a list of EgressRouter Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressRouter) List of egressrouters. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.220. io.openshift.operator.network.v1.OperatorPKIList schema Description OperatorPKIList is a list of OperatorPKI Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorPKI) List of operatorpkis. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.221. io.openshift.operator.samples.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.222. io.openshift.operator.v1.AuthenticationList schema Description AuthenticationList is a list of Authentication Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Authentication) List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
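The resourceVersion carried in each list's ListMeta is what makes the common list-then-watch pattern work: list once, then open a watch from the returned resourceVersion so that no intervening changes are missed. Below is a rough sketch against authentications.operator.openshift.io, chosen only as an example; the kubeconfig path is a placeholder, and a production controller would normally use an informer rather than a raw watch.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	gvr := schema.GroupVersionResource{
		Group:    "operator.openshift.io",
		Version:  "v1",
		Resource: "authentications",
	}

	// List first; the AuthenticationList's metadata.resourceVersion marks the
	// point in time from which the watch should resume.
	list, err := client.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	w, err := client.Resource(gvr).Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: list.GetResourceVersion(),
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		fmt.Println(event.Type) // ADDED, MODIFIED, DELETED, ...
	}
}
```
1.223.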
io.openshift.operator.v1.CloudCredentialList schema Description CloudCredentialList is a list of CloudCredential Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudCredential) List of cloudcredentials. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.224. io.openshift.operator.v1.ClusterCSIDriverList schema Description ClusterCSIDriverList is a list of ClusterCSIDriver Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterCSIDriver) List of clustercsidrivers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.225. io.openshift.operator.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.226. io.openshift.operator.v1.ConsoleList schema Description ConsoleList is a list of Console Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Console) List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.227. io.openshift.operator.v1.CSISnapshotControllerList schema Description CSISnapshotControllerList is a list of CSISnapshotController Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSISnapshotController) List of csisnapshotcontrollers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.228. io.openshift.operator.v1.DNSList schema Description DNSList is a list of DNS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNS) List of dnses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.229. io.openshift.operator.v1.EtcdList schema Description EtcdList is a list of Etcd Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Etcd) List of etcds. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.230. io.openshift.operator.v1.IngressControllerList schema Description IngressControllerList is a list of IngressController Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressController) List of ingresscontrollers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.231. io.openshift.operator.v1.InsightsOperatorList schema Description InsightsOperatorList is a list of InsightsOperator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InsightsOperator) List of insightsoperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.232. io.openshift.operator.v1.KubeAPIServerList schema Description KubeAPIServerList is a list of KubeAPIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeAPIServer) List of kubeapiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.233. io.openshift.operator.v1.KubeControllerManagerList schema Description KubeControllerManagerList is a list of KubeControllerManager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeControllerManager) List of kubecontrollermanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.234. io.openshift.operator.v1.KubeSchedulerList schema Description KubeSchedulerList is a list of KubeScheduler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeScheduler) List of kubeschedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.235. io.openshift.operator.v1.KubeStorageVersionMigratorList schema Description KubeStorageVersionMigratorList is a list of KubeStorageVersionMigrator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeStorageVersionMigrator) List of kubestorageversionmigrators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.236. io.openshift.operator.v1.MachineConfigurationList schema Description MachineConfigurationList is a list of MachineConfiguration Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfiguration) List of machineconfigurations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.237. io.openshift.operator.v1.NetworkList schema Description NetworkList is a list of Network Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Network) List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.238. io.openshift.operator.v1.OpenShiftAPIServerList schema Description OpenShiftAPIServerList is a list of OpenShiftAPIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OpenShiftAPIServer) List of openshiftapiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.239. 
io.openshift.operator.v1.OpenShiftControllerManagerList schema Description OpenShiftControllerManagerList is a list of OpenShiftControllerManager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OpenShiftControllerManager) List of openshiftcontrollermanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.240. io.openshift.operator.v1.ServiceCAList schema Description ServiceCAList is a list of ServiceCA Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceCA) List of servicecas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.241. io.openshift.operator.v1.StorageList schema Description StorageList is a list of Storage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Storage) List of storages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.242. 
io.openshift.operator.v1alpha1.ImageContentSourcePolicyList schema Description ImageContentSourcePolicyList is a list of ImageContentSourcePolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageContentSourcePolicy) List of imagecontentsourcepolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.243. io.openshift.performance.v2.PerformanceProfileList schema Description PerformanceProfileList is a list of PerformanceProfile Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PerformanceProfile) List of performanceprofiles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.244. io.openshift.quota.v1.ClusterResourceQuotaList schema Description ClusterResourceQuotaList is a list of ClusterResourceQuota Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterResourceQuota) List of clusterresourcequotas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.245. 
io.openshift.security.v1.SecurityContextConstraintsList schema Description SecurityContextConstraintsList is a list of SecurityContextConstraints Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (SecurityContextConstraints) List of securitycontextconstraints. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.246. io.openshift.tuned.v1.ProfileList schema Description ProfileList is a list of Profile Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Profile) List of profiles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.247. io.openshift.tuned.v1.TunedList schema Description TunedList is a list of Tuned Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Tuned) List of tuneds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.248. io.x-k8s.cluster.infrastructure.v1beta1.Metal3RemediationList schema Description Metal3RemediationList is a list of Metal3Remediation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Metal3Remediation) List of metal3remediations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.249. io.x-k8s.cluster.infrastructure.v1beta1.Metal3RemediationTemplateList schema Description Metal3RemediationTemplateList is a list of Metal3RemediationTemplate Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Metal3RemediationTemplate) List of metal3remediationtemplates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.250. io.x-k8s.cluster.ipam.v1beta1.IPAddressClaimList schema Description IPAddressClaimList is a list of IPAddressClaim Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPAddressClaim) List of ipaddressclaims. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.251. io.x-k8s.cluster.ipam.v1beta1.IPAddressList schema Description IPAddressList is a list of IPAddress Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPAddress) List of ipaddresses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.252. org.ovn.k8s.v1.AdminPolicyBasedExternalRouteList schema Description AdminPolicyBasedExternalRouteList is a list of AdminPolicyBasedExternalRoute Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AdminPolicyBasedExternalRoute) List of adminpolicybasedexternalroutes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.253. org.ovn.k8s.v1.EgressFirewallList schema Description EgressFirewallList is a list of EgressFirewall Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressFirewall) List of egressfirewalls. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.254. org.ovn.k8s.v1.EgressIPList schema Description EgressIPList is a list of EgressIP Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressIP) List of egressips. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.255. org.ovn.k8s.v1.EgressQoSList schema Description EgressQoSList is a list of EgressQoS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressQoS) List of egressqoses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.256. org.ovn.k8s.v1.EgressServiceList schema Description EgressServiceList is a list of EgressService Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressService) List of egressservices. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
|
[
"<quantity> ::= <signedNumber><suffix>",
"(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)",
"(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)",
"(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }",
"type PluginA struct { AOption string `json:\"aOption\"` }",
"{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/common_object_reference/api-object-reference
|
Chapter 2. Downloading log files and diagnostic information using must-gather
|
Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see the Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage Administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials; by default, it is ~/.docker/config.json . --insecure Add this flag only if the mirror registry is insecure. For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Redhat Openshift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data. Important For a disconnected environment deployment, replace the image in the --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster-related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands, such as status and cluster health. 2.1. Variations of must-gather-commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration> : To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp> : Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather . <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 10 Nov 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments. 
To avoid this, run must-gather in modular mode and collect only the resources you require by using the following command: Replace < -arg> with one or more of the following arguments to specify the resources for which the must-gather logs are required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, clusterscoped resources and Ceph logs) -d , --dr DR logs -n , --noobaa Noobaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced namespaced resources -cs , --clusterscoped clusterscoped resources -pc , --provider openshift-storage-client logs from a provider/consumer cluster (includes all the logs under the operator namespace, pods, deployments, secrets, configmaps, and other resources) -h , --help Print the help message Note If no < -arg> is included, must-gather collects all logs.
|
[
"oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 <local-registry> /odf4/odf-must-gather-rhel9:v4.15 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>",
"oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ --node-name=_<node-name>_",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>",
"oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather <-arg>"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/downloading-log-files-and-diagnostic-information_rhodf
|
2.13. Using the Notification API
|
2.13. Using the Notification API The cgroups notification API allows user-space applications to receive notifications about the changing status of a cgroup. Currently, the notification API only supports monitoring of the Out of Memory (OOM) control file: memory.oom_control . To create a notification handler, write a C program using the following instructions: Using the eventfd() function, create a file descriptor for event notifications. For more information, refer to the eventfd(2) man page. To monitor the memory.oom_control file, open it using the open() function. For more information, refer to the open(2) man page. Use the write() function to write the following arguments to the cgroup.event_control file of the cgroup whose memory.oom_control file you are monitoring: where: event_file_descriptor is the file descriptor returned by eventfd() , and OOM_control_file_descriptor is the file descriptor obtained by opening the respective memory.oom_control file. For more information on writing to a file, refer to the write(2) man page. When the above program is started, it is notified of any OOM situation in the cgroup it is monitoring. Note that OOM notifications only work in non-root cgroups. For more information on the memory.oom_control tunable parameter, refer to Section 3.7, "memory" . For more information on configuring notifications for OOM control, refer to Example 3.3, "OOM Control and Notifications" .
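Putting these steps together, the following minimal C sketch shows one way to build such a handler. It assumes a memory cgroup named example mounted under /cgroup/memory (a hypothetical path; adjust it to match your hierarchy) and simply blocks until the kernel reports an OOM event in that cgroup:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/eventfd.h>

int main(void)
{
    /* Step 1: create a file descriptor for event notifications. */
    int efd = eventfd(0, 0);
    if (efd == -1) { perror("eventfd"); return 1; }

    /* Step 2: open the memory.oom_control file of the cgroup to monitor.
     * The path below is an assumed example location. */
    int oom_fd = open("/cgroup/memory/example/memory.oom_control", O_RDONLY);
    if (oom_fd == -1) { perror("open memory.oom_control"); return 1; }

    /* Step 3: write "<event_file_descriptor> <OOM_control_file_descriptor>"
     * to the cgroup.event_control file of the same cgroup. */
    int ctl_fd = open("/cgroup/memory/example/cgroup.event_control", O_WRONLY);
    if (ctl_fd == -1) { perror("open cgroup.event_control"); return 1; }

    char line[64];
    snprintf(line, sizeof(line), "%d %d", efd, oom_fd);
    if (write(ctl_fd, line, strlen(line)) == -1) { perror("write cgroup.event_control"); return 1; }

    /* Block until the kernel signals an OOM event in the monitored cgroup. */
    uint64_t counter;
    if (read(efd, &counter, sizeof(counter)) == sizeof(counter))
        printf("OOM event received in the monitored cgroup\n");

    close(ctl_fd);
    close(oom_fd);
    close(efd);
    return 0;
}

Run the compiled program with sufficient privileges to read the cgroup files. When a process in the example cgroup triggers an OOM condition, the read() call on the eventfd returns and the message is printed.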
|
[
"<event_file_descriptor> <OOM_control_file_descriptor>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-using_the_notification_api
|
Chapter 1. Red Hat Integration
|
Chapter 1. Red Hat Integration Red Hat Integration is a comprehensive set of integration and event processing technologies for creating, extending, and deploying container-based integration services across hybrid and multicloud environments. Red Hat Integration provides an agile, distributed, and API-centric solution that organizations can use to connect and share data between applications and systems required in a digital world. Red Hat Integration includes the following capabilities: Real-time messaging Cross-datacenter message streaming API connectivity Application connectors Enterprise integration patterns API management Data transformation Service composition and orchestration Additional resources Understanding enterprise integration
| null |
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/release_notes_for_red_hat_integration_2023.q4/about-red-hat-integration_integration
|
Chapter 2. Generating a new dataset with Synthetic data generation (SDG)
|
Chapter 2. Generating a new dataset with Synthetic data generation (SDG) After customizing your taxonomy tree, you can generate a synthetic dataset using the Synthetic Data Generation (SDG) process on Red Hat Enterprise Linux AI. SDG is a process that creates an artificially generated dataset that mimics real data based on provided examples. SDG uses a YAML file containing question-and-answer pairs as input data. With these examples, SDG utilizes the mixtral-8x7b-instruct-v0-1 LLM as a teacher model to generate similar question-and-answer pairs. In the SDG pipeline, many questions are generated and scored based on quality, where the mixtral-8x7b-instruct-v0-1 model assesses the quality of these questions. The pipeline then selects the highest-scoring questions, generates corresponding answers, and includes these pairs in the synthetic dataset. 2.1. Creating a synthetic dataset using your examples You can use your examples and run the SDG process to create a synthetic dataset. Prerequisites You installed RHEL AI with the bootable container image. You created a custom qna.yaml file with knowledge data. You downloaded the mixtral-8x7b-instruct-v0-1 teacher model for SDG. You downloaded the skills-adapter-v3:1.1 and knowledge-adapter-v3:1.1 LoRA layered skills and knowledge adapter. You have root user access on your machine. Procedure To generate a new synthetic dataset, based on your custom taxonomy with knowledge, run the following command: USD ilab data generate This command runs SDG with mixtral-8x7B-instruct as the teacher model Note You can use the --enable-serving-output flag when running the ilab data generate command to display the vLLM startup logs. At the start of the SDG process, vLLM attempts to start a server. Example output of vLLM attempting to start a server Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120 INFO 2024-08-22 17:01:19,142 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 3/120 Once vLLM connects, the SDG process starts creating synthetic data from your examples. Example output of vLLM connecting and SDG generating INFO 2024-08-22 15:16:38,933 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 73/120 INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 74/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. Adding more examples may help. 
The SDG process completes when the CLI displays the location of your new data set. Example output of a successful SDG run INFO 2024-08-16 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-08-16 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s Note This process can be time consuming depending on your hardware specifications. Verify the files are created by running the following command: USD ls ~/.local/share/instructlab/datasets/ Example output knowledge_recipe_2024-08-13T20_54_21.yaml skills_recipe_2024-08-13T20_54_21.yaml knowledge_train_msgs_2024-08-13T20_54_21.jsonl skills_train_msgs_2024-08-13T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-08-13T20_54_21.jsonl node_datasets_2024-08-13T15_12_12/ Important Make a note of your most recent knowledge_train_msgs.jsonl and skills_train_msgs.jsonl file. You need to specify this file during multi-phase training. Each JSONL has the time stamp on the file, for example knowledge_train_msgs_2024-08-08T20_04_28.jsonl , use the most recent file when training. Optional: You can view output of SDG by navigating to the ~/.local/share/datasets/ directory and opening the JSONL file. USD cat ~/.local/share/datasets/<jsonl-dataset> Example output of a SDG JSONL file {"messages":[{"content":"I am, Red Hat\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.","role":"system"},{"content":"<|user|>\n### Deep-sky objects\n\nThe constellation does not lie on the [galactic\nplane](galactic_plane \"wikilink\") of the Milky Way, and there are no\nprominent star clusters. [NGC 625](NGC_625 \"wikilink\") is a dwarf\n[irregular galaxy](irregular_galaxy \"wikilink\") of apparent magnitude\n11.0 and lying some 12.7 million light years distant. Only 24000 light\nyears in diameter, it is an outlying member of the [Sculptor\nGroup](Sculptor_Group \"wikilink\"). NGC 625 is thought to have been\ninvolved in a collision and is experiencing a burst of [active star\nformation](Active_galactic_nucleus \"wikilink\"). [NGC\n37](NGC_37 \"wikilink\") is a [lenticular\ngalaxy](lenticular_galaxy \"wikilink\") of apparent magnitude 14.66. It is\napproximately 42 [kiloparsecs](kiloparsecs \"wikilink\") (137,000\n[light-years](light-years \"wikilink\")) in diameter and about 12.9\nbillion years old. [Robert's Quartet](Robert's_Quartet \"wikilink\")\n(composed of the irregular galaxy [NGC 87](NGC_87 \"wikilink\"), and three\nspiral galaxies [NGC 88](NGC_88 \"wikilink\"), [NGC 89](NGC_89 \"wikilink\")\nand [NGC 92](NGC_92 \"wikilink\")) is a group of four galaxies located\naround 160 million light-years away which are in the process of\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\n243-49 is [HLX-1](HLX-1 \"wikilink\"), an [intermediate-mass black\nhole](intermediate-mass_black_hole \"wikilink\")the first one of its kind\nidentified. It is thought to be a remnant of a dwarf galaxy that was\nabsorbed in a [collision](Interacting_galaxy \"wikilink\") with ESO\n243-49. 
Before its discovery, this class of black hole was only\nhypothesized.\n\nLying within the bounds of the constellation is the gigantic [Phoenix\ncluster](Phoenix_cluster \"wikilink\"), which is around 7.3 million light\nyears wide and 5.7 billion light years away, making it one of the most\nmassive [galaxy clusters](galaxy_cluster \"wikilink\"). It was first\ndiscovered in 2010, and the central galaxy is producing an estimated 740\nnew stars a year. Larger still is [El\nGordo](El_Gordo_(galaxy_cluster) \"wikilink\"), or officially ACT-CL\nJ0102-4915, whose discovery was announced in 2012. Located around\n7.2 billion light years away, it is composed of two subclusters in the\nprocess of colliding, resulting in the spewing out of hot gas, seen in\nX-rays and infrared images.\n\n### Meteor showers\n\nPhoenix is the [radiant](radiant_(meteor_shower) \"wikilink\") of two\nannual [meteor showers](meteor_shower \"wikilink\"). The\n[Phoenicids](Phoenicids \"wikilink\"), also known as the December\nPhoenicids, were first observed on 3 December 1887. The shower was\nparticularly intense in December 1956, and is thought related to the\nbreakup of the [short-period comet](short-period_comet \"wikilink\")\n[289P\/Blanpain](289P\/Blanpain \"wikilink\"). It peaks around 45 December,\nthough is not seen every year. A very minor meteor shower peaks\naround July 14 with around one meteor an hour, though meteors can be\nseen anytime from July 3 to 18; this shower is referred to as the July\nPhoenicids.\n\nHow many light years wide is the Phoenix cluster?\n<|assistant|>\n' 'The Phoenix cluster is around 7.3 million light years wide.'","role":"pretraining"}],"metadata":"{\"sdg_document\": \"### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")\the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. 
Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012. Located around\\n7.2 billion light years away, it is composed of two subclusters in the\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\nX-rays and infrared images.\\n\\n### Meteor showers\\n\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\"wikilink\\\") of two\\nannual [meteor showers](meteor_shower \\\"wikilink\\\"). The\\n[Phoenicids](Phoenicids \\\"wikilink\\\"), also known as the December\\nPhoenicids, were first observed on 3 December 1887. The shower was\\nparticularly intense in December 1956, and is thought related to the\\nbreakup of the [short-period comet](short-period_comet \\\"wikilink\\\")\\n[289P\/Blanpain](289P\/Blanpain \\\"wikilink\\\"). It peaks around 4\5 December,\\nthough is not seen every year. A very minor meteor shower peaks\\naround July 14 with around one meteor an hour, though meteors can be\\nseen anytime from July 3 to 18; this shower is referred to as the July\\nPhoenicids.\", \"domain\": \"astronomy\", \"dataset\": \"document_knowledge_qa\"}","id":"1df7c219-a062-4511-8bae-f55c88927dc1"}
|
[
"ilab data generate",
"Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120 INFO 2024-08-22 17:01:19,142 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 3/120",
"INFO 2024-08-22 15:16:38,933 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 73/120 INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 74/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. Adding more examples may help.",
"INFO 2024-08-16 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-08-16 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s",
"ls ~/.local/share/instructlab/datasets/",
"knowledge_recipe_2024-08-13T20_54_21.yaml skills_recipe_2024-08-13T20_54_21.yaml knowledge_train_msgs_2024-08-13T20_54_21.jsonl skills_train_msgs_2024-08-13T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-08-13T20_54_21.jsonl node_datasets_2024-08-13T15_12_12/",
"cat ~/.local/share/datasets/<jsonl-dataset>",
"{\"messages\":[{\"content\":\"I am, Red Hat\\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.\",\"role\":\"system\"},{\"content\":\"<|user|>\\n### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012. Located around\\n7.2 billion light years away, it is composed of two subclusters in the\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\nX-rays and infrared images.\\n\\n### Meteor showers\\n\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\"wikilink\\\") of two\\nannual [meteor showers](meteor_shower \\\"wikilink\\\"). The\\n[Phoenicids](Phoenicids \\\"wikilink\\\"), also known as the December\\nPhoenicids, were first observed on 3 December 1887. The shower was\\nparticularly intense in December 1956, and is thought related to the\\nbreakup of the [short-period comet](short-period_comet \\\"wikilink\\\")\\n[289P\\/Blanpain](289P\\/Blanpain \\\"wikilink\\\"). It peaks around 45 December,\\nthough is not seen every year. 
A very minor meteor shower peaks\\naround July 14 with around one meteor an hour, though meteors can be\\nseen anytime from July 3 to 18; this shower is referred to as the July\\nPhoenicids.\\n\\nHow many light years wide is the Phoenix cluster?\\n<|assistant|>\\n' 'The Phoenix cluster is around 7.3 million light years wide.'\",\"role\":\"pretraining\"}],\"metadata\":\"{\\\"sdg_document\\\": \\\"### Deep-sky objects\\\\n\\\\nThe constellation does not lie on the [galactic\\\\nplane](galactic_plane \\\\\\\"wikilink\\\\\\\") of the Milky Way, and there are no\\\\nprominent star clusters. [NGC 625](NGC_625 \\\\\\\"wikilink\\\\\\\") is a dwarf\\\\n[irregular galaxy](irregular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude\\\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\\\nyears in diameter, it is an outlying member of the [Sculptor\\\\nGroup](Sculptor_Group \\\\\\\"wikilink\\\\\\\"). NGC 625 is thought to have been\\\\ninvolved in a collision and is experiencing a burst of [active star\\\\nformation](Active_galactic_nucleus \\\\\\\"wikilink\\\\\\\"). [NGC\\\\n37](NGC_37 \\\\\\\"wikilink\\\\\\\") is a [lenticular\\\\ngalaxy](lenticular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude 14.66. It is\\\\napproximately 42 [kiloparsecs](kiloparsecs \\\\\\\"wikilink\\\\\\\") (137,000\\\\n[light-years](light-years \\\\\\\"wikilink\\\\\\\")) in diameter and about 12.9\\\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\\\\\"wikilink\\\\\\\")\\\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\\\\\"wikilink\\\\\\\"), and three\\\\nspiral galaxies [NGC 88](NGC_88 \\\\\\\"wikilink\\\\\\\"), [NGC 89](NGC_89 \\\\\\\"wikilink\\\\\\\")\\\\nand [NGC 92](NGC_92 \\\\\\\"wikilink\\\\\\\")) is a group of four galaxies located\\\\naround 160 million light-years away which are in the process of\\\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\\\n243-49 is [HLX-1](HLX-1 \\\\\\\"wikilink\\\\\\\"), an [intermediate-mass black\\\\nhole](intermediate-mass_black_hole \\\\\\\"wikilink\\\\\\\")\\the first one of its kind\\\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\\\nabsorbed in a [collision](Interacting_galaxy \\\\\\\"wikilink\\\\\\\") with ESO\\\\n243-49. Before its discovery, this class of black hole was only\\\\nhypothesized.\\\\n\\\\nLying within the bounds of the constellation is the gigantic [Phoenix\\\\ncluster](Phoenix_cluster \\\\\\\"wikilink\\\\\\\"), which is around 7.3 million light\\\\nyears wide and 5.7 billion light years away, making it one of the most\\\\nmassive [galaxy clusters](galaxy_cluster \\\\\\\"wikilink\\\\\\\"). It was first\\\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\\\nnew stars a year. Larger still is [El\\\\nGordo](El_Gordo_(galaxy_cluster) \\\\\\\"wikilink\\\\\\\"), or officially ACT-CL\\\\nJ0102-4915, whose discovery was announced in 2012. Located around\\\\n7.2 billion light years away, it is composed of two subclusters in the\\\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\\\nX-rays and infrared images.\\\\n\\\\n### Meteor showers\\\\n\\\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\\\\\"wikilink\\\\\\\") of two\\\\nannual [meteor showers](meteor_shower \\\\\\\"wikilink\\\\\\\"). The\\\\n[Phoenicids](Phoenicids \\\\\\\"wikilink\\\\\\\"), also known as the December\\\\nPhoenicids, were first observed on 3 December 1887. 
The shower was\\\\nparticularly intense in December 1956, and is thought related to the\\\\nbreakup of the [short-period comet](short-period_comet \\\\\\\"wikilink\\\\\\\")\\\\n[289P\\/Blanpain](289P\\/Blanpain \\\\\\\"wikilink\\\\\\\"). It peaks around 4\\5 December,\\\\nthough is not seen every year. A very minor meteor shower peaks\\\\naround July 14 with around one meteor an hour, though meteors can be\\\\nseen anytime from July 3 to 18; this shower is referred to as the July\\\\nPhoenicids.\\\", \\\"domain\\\": \\\"astronomy\\\", \\\"dataset\\\": \\\"document_knowledge_qa\\\"}\",\"id\":\"1df7c219-a062-4511-8bae-f55c88927dc1\"}"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/creating_a_custom_llm_using_rhel_ai/generate_sdg
|
Chapter 2. Configuring an Azure account
|
Chapter 2. Configuring an Azure account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 44 20 per region A default cluster requires 44 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap and control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the compute machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 44 vCPUs. The bootstrap node VM, which uses 8 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage . 2.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. 2.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. 
For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 2.4. Required Azure roles OpenShift Container Platform needs a service principal so it can manage Microsoft Azure resources. Before you can create a service principal, review the following information: Your Azure account subscription must have the following roles: User Access Administrator Contributor Your Azure Active Directory (AD) must have the following permission: "microsoft.directory/servicePrincipals/createAsOwner" To set roles on the Azure portal, see the Manage access to Azure resources using RBAC and the Azure portal in the Azure documentation. 2.5. Required Azure permissions for installer-provisioned infrastructure When you assign Contributor and User Access Administrator roles to the service principal, you automatically grant all the required permissions. If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 2.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 2.2. Required permissions for creating compute resources Microsoft.Compute/availabilitySets/read Microsoft.Compute/availabilitySets/write Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Example 2.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 2.4. 
Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Note The following permissions are not required to create the private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Example 2.5. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 2.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 2.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 2.9. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 2.10. 
Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Example 2.11. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action Example 2.12. Optional permissions for installing a private cluster with Azure Network Address Translation (NAT) Microsoft.Network/natGateways/join/action Microsoft.Network/natGateways/read Microsoft.Network/natGateways/write Example 2.13. Optional permissions for installing a private cluster with Azure firewall Microsoft.Network/azureFirewalls/applicationRuleCollections/write Microsoft.Network/azureFirewalls/read Microsoft.Network/azureFirewalls/write Microsoft.Network/routeTables/join/action Microsoft.Network/routeTables/read Microsoft.Network/routeTables/routes/read Microsoft.Network/routeTables/routes/write Microsoft.Network/routeTables/write Microsoft.Network/virtualNetworks/peer/action Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write Example 2.14. Optional permission for running gather bootstrap Microsoft.Compute/virtualMachines/instanceView/read The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. You can use the same permissions to delete a private OpenShift Container Platform cluster on Azure. Example 2.15. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 2.16. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Example 2.17. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 2.18. Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Note The following permissions are not required to delete a private OpenShift Container Platform cluster on Azure. Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Example 2.19. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 2.20. 
Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 2.21. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions to your subscription. Later, you can re-scope these permissions to the installer created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. By default, the OpenShift Container Platform installation program assigns the Azure identity the Contributor role. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 2.6. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. If you want to use a custom role, you have created a custom role with the required permissions listed in the Required Azure permissions for installer-provisioned infrastructure section. Procedure Log in to the Azure CLI: USD az login If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID. 
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": <service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. If you applied the Contributor role to your service principal, assign the User Administrator Access role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 1 Specify the appId parameter value for your service principal. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 2.7. Supported Azure Marketplace regions Installing a cluster using the Azure Marketplace image is available to customers who purchase the offer in North America and EMEA. While the offer must be purchased in North America or EMEA, you can deploy the cluster to any of the Azure public partitions that OpenShift Container Platform supports. Note Deploying a cluster using the Azure Marketplace image is not supported for the Azure Government regions. 2.8. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 2.9. steps Install an OpenShift Container Platform cluster on Azure. You can install a customized cluster or quickly install a cluster with default options.
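If you follow the custom role approach from section 2.5 instead of assigning the built-in Contributor role, you can create the role definition from a JSON file and then reference it when you create the service principal. The following is a sketch only; the file name ocp-installer-role.json and the role name OpenShift Installer are placeholders, and the file must list the actions from Examples 2.1 through 2.8 under "Actions" with "AssignableScopes" set to your subscription:

az role definition create --role-definition @ocp-installer-role.json
az ad sp create-for-rbac --role "OpenShift Installer" --name <service_principal> --scopes /subscriptions/<subscription_id>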
|
[
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure/installing-azure-account
|
Appendix A. Reference material
|
Appendix A. Reference material A.1. About MTA command-line arguments The following is a detailed description of the available MTA command line arguments. Note To run the MTA command without prompting, for example when executing from a script, you must use the following arguments: --overwrite --input --target Example A.1. MTA CLI arguments Command Type Description --analyze-known-libraries Flag to analyze known open-source libraries. --bulk Flag for running multiple analyze commands in bulk, which result in a combined static report. --context-lines Integer Flag to define the number of lines of source code to include in the output for each incident (default: 100 ). -d , --dependency-folders String Array Flag for the directory for dependencies. --enable-default-rulesets Boolean Flag to run default rulesets with analysis (default: true ). -h , --help Flag to output help for analyze --http-proxy String Flag for Hyper Text Transfer Protocol (HTTP) proxy string URL --https-proxy String Flag for Hypertext Transfer Protocol Secure (HTTPS) proxy string URL --incident-selector String Flag to select incidents based on custom variables, for example, !package=io.konveyor.demo.config-utils -i , --input String Flag for the path to application source code or a binary. For more details, see Specifying the input . --jaeger-endpoint String Flag for the jaeger endpoint to collect traces. --json-output Flag to create analysis and dependency output as JSON. -l , --label-selector String Flag to run rules based on a specified label selector expression. --list-providers Flag to list available supported providers. --list-sources Flag to list rules for available migration sources. --list-targets Flag to list rules for available migration targets. --maven-settings string Flag path to a custom Maven settings file to use -m , --mode String Flag for analysis mode, this must be one of full , for source and dependencies , or source-only (default full ). --no-proxy String Flag to excluded URLs from passing through any proxy (relevant only with proxy) -o , --output String Flag for the path to the directory for analysis output. For more details, see Specifying the output directory . --override-provider-settings String Flag to override the provider settings. The analysis pod runs on the host network, and no providers are started. --overwrite Flag to overwrite the output directory. If you do not specify this argument and the --output directory exists, you are prompted to choose whether to overwrite the contents. --provider String Array Flag to specify which provider or providers to run. --rules String Array Flag to specify the filename or directory containing rule files. Use multiple times for additional rules, for example, --rules --rules ... . --run-local Local flag to run analysis directly on local system without containers (for Java and Maven) --skip-static-report Flag to not generate static report. -s , --source String Array Flag for the source technology to consider for analysis. Use multiple times for additional sources, for example, --source --source ... . For more details, see Setting the source technology . -t , --target String Array Flag for the target technology to consider for analysis. Use multiple times for additional targets, for example, --target --target ... . For more details, see Setting the target technology . A.1.1. Specifying the input A space-delimited list of the path to the file or directory containing one or more applications to be analyzed. This argument is required. 
Usage Depending on whether the input file type provided to the --input argument is a file or directory, it will be evaluated as follows depending on the additional arguments provided. Directory --sourceMode : The directory is evaluated as a single application. File --sourceMode : The file is evaluated as a compressed project. A.1.2. Specifying the output directory Specify the path to the directory to output the report information generated by MTA. Usage If omitted, the report will be generated in an <INPUT_ARCHIVE_OR_DIRECTORY>.report directory. If the output directory exists, you will be prompted with the following question with a default answer of N : However, if you specify the --overwrite argument, MTA will proceed to delete and recreate the directory. See the description of this argument for more information. A.1.3. Setting the source technology A space-delimited list of one or more source technologies, servers, platforms, or frameworks to migrate from. You can use this argument, in conjunction with the --target argument, to determine which rulesets are used. Use the --listSourceTechnologies argument to list all available sources. Usage The --source argument now provides version support, which follows the Maven version range syntax . This instructs MTA to only run the rulesets matching the specified versions, for example, --source eap:5 . Warning When migrating to JBoss EAP, be sure to specify the version, for example, eap:6 . Specifying only eap will run rulesets for all versions of JBoss EAP, including those not relevant to your migration path. See Supported migration paths in Introduction to the Migration Toolkit for Applications for the appropriate JBoss EAP version. A.1.4. Setting the target technology A space-delimited list of one or more target technologies, servers, platforms, or frameworks to migrate to. You can use this argument, in conjunction with the --source argument, to determine which rulesets are used. If you do not specify this option, you are prompted to select a target. Use the --listTargetTechnologies argument to list all available targets. Usage The --target argument now provides version support, which follows the Maven version range syntax . This instructs MTA to only run the rulesets matching the specified versions, for example, --target eap:7 . A.2. 
Supported technology tags The following technology tags are supported in MTA 7.1.1: 0MQ Client 3scale Acegi Security AcrIS Security ActiveMQ library Airframe Airlift Log Manager AKKA JTA Akka Testkit Amazon SQS Client AMQP Client Anakia AngularFaces ANTLR StringTemplate AOP Alliance Apache Accumulo Client Apache Aries Apache Commons JCS Apache Commons Validator Apache Flume Apache Geronimo Apache Hadoop Apache HBase Client Apache Ignite Apache Karaf Apache Mahout Apache Meecrowave JTA Apache Sirona JTA Apache Synapse Apache Tapestry Apiman Applet Arquillian AspectJ Atomikos JTA Avalon Logkit Axion Driver Axis Axis2 BabbageFaces Bean Validation BeanInject Blaze Blitz4j BootsFaces Bouncy Castle ButterFaces Cache API Cactus Camel Camel Messaging Client Camunda Cassandra Client CDI Cfg Engine Chunk Templates Cloudera Coherence Common Annotations Composite Logging Composite Logging JCL Concordion CSS Cucumber Dagger DbUnit Demoiselle JTA Derby Driver Drools DVSL Dynacache EAR Deployment Easy Rules EasyMock Eclipse RCP EclipseLink Ehcache EJB EJB XML Elasticsearch Entity Bean EtlUnit Eureka Everit JTA Evo JTA Feign File system Logging FormLayoutMaker FreeMarker Geronimo JTA GFC Logging GIN GlassFish JTA Google Guice Grails Grapht DI Guava Testing GWT H2 Driver Hamcrest Handlebars HavaRunner Hazelcast Hdiv Hibernate Hibernate Cfg Hibernate Mapping Hibernate OGM HighFaces HornetQ Client HSQLDB Driver HTTP Client HttpUnit ICEfaces Ickenham Ignite JTA Ikasan iLog Infinispan Injekt for Kotlin Iroh Istio Jamon Jasypt Java EE Batch Java EE Batch API Java EE JACC Java EE JAXB Java EE JAXR Java EE JSON-P Java Transaction API JavaFX JavaScript Javax Inject JAX-RS JAX-WS JayWire JBehave JBoss Cache JBoss EJB XML JBoss logging JBoss Transactions JBoss Web XML JBossMQ Client JBPM JCA Jcabi Log JCache JCunit JDBC JDBC datasources JDBC XA datasources Jersey Jetbrick Template Jetty JFreeChart JFunk JGoodies JMock JMockit JMS Connection Factory JMS Queue JMS Topic JMustache JNA JNI JNLP JPA entities JPA Matchers JPA named queries JPA XML JSecurity JSF JSF Page JSilver JSON-B JSP Page JSTL JTA Jukito JUnit Ka DI Keyczar Kibana KLogger Kodein Kotlin Logging KouInject KumuluzEE JTA LevelDB Client Liferay LiferayFaces Lift JTA Log.io Log4J Log4s Logback Logging Utils Logstash Lumberjack Macros Magicgrouplayout Mail Management EJB MapR MckoiSQLDB Driver Memcached Message (MDB) Micro DI Micrometer Microsoft SQL Driver MiGLayout MinLog Mixer Mockito MongoDB Client Monolog Morphia MRules Mule Mule Functional Test Framework MultithreadedTC Mycontainer JTA MyFaces MySQL Driver Narayana Arjuna Needle Neo4j NLOG4J Nuxeo JTA/JCA OACC OAUTH OCPsoft Logging Utils OmniFaces OpenFaces OpenPojo OpenSAML OpenWS OPS4J Pax Logging Service Oracle ADF Oracle DB Driver Oracle Forms Orion EJB XML Orion Web XML Oscache OTR4J OW2 JTA OW2 Log Util OWASP CSRF Guard OWASP ESAPI Peaberry Pega Persistence units Petals EIP PicketBox PicketLink PicoContainer Play Play Test Plexus Container Polyforms DI Portlet PostgreSQL Driver PowerMock PrimeFaces Properties Qpid Client RabbitMQ Client RandomizedTesting Runner Resource Adapter REST Assured Restito RichFaces RMI RocketMQ Client Rythm Template Engine SAML Santuario Scalate Scaldi Scribe Seam Security Realm ServiceMix Servlet ShiftOne Shiro Silk DI SLF4J Snippetory Template Engine SNMP4J Socket handler logging Spark Specsy Spock Spring Spring Batch Spring Boot Spring Boot Actuator Spring Boot Cache Spring Boot Flo Spring Cloud Config Spring Cloud Function Spring Data Spring Data JPA spring DI 
Spring Integration Spring JMX Spring Messaging Client Spring MVC Spring Properties Spring Scheduled Spring Security Spring Shell Spring Test Spring Transactions Spring Web SQLite Driver SSL Standard Widget Toolkit (SWT) Stateful (SFSB) Stateless (SLSB) Sticky Configured Stripes Struts SubCut Swagger SwarmCache Swing SwitchYard Syringe Talend ESB Teiid TensorFlow Test Interface TestNG Thymeleaf TieFaces tinylog Tomcat Tornado Inject Trimou Trunk JGuard Twirl Twitter Util Logging UberFire Unirest Unitils Vaadin Velocity Vlad Water Template Engine Web Services Metadata Web Session Web XML File WebLogic Web XML Webmacro WebSocket WebSphere EJB WebSphere EJB Ext WebSphere Web XML WebSphere WS Binding WebSphere WS Extension Weka Weld WF Core JTA Wicket Winter WSDL WSO2 WSS4J XACML XFire XMLUnit Zbus Client Zipkin A.3. About rule story points A.3.1. What are story points? Story points are an abstract metric commonly used in Agile software development to estimate the level of effort needed to implement a feature or change. The Migration Toolkit for Applications uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. It does not necessarily translate to man-hours, but the value should be consistent across tasks. A.3.2. How story points are estimated in rules Estimating the level of effort for the story points for a rule can be tricky. The following are the general guidelines MTA uses when estimating the level of effort required for a rule. Level of Effort Story Points Description Information 0 An informational warning with very low or no priority for migration. Trivial 1 The migration is a trivial change or a simple library swap with no or minimal API changes. Complex 3 The changes required for the migration task are complex, but have a documented solution. Redesign 5 The migration task requires a redesign or a complete library change, with significant API changes. Rearchitecture 7 The migration requires a complete rearchitecture of the component or subsystem. Unknown 13 The migration solution is not known and may need a complete rewrite. A.3.3. Task category In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort. Mandatory The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform. Optional If the migration task is not completed, the application should work, but the results may not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed. Potential The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type. Information The task is included to inform you of the existence of certain files. These may need to be examined or modified as part of the modernization effort, but changes are typically not required. For more information on categorizing tasks, see Using custom rule categories . A.4. Additional Resources A.4.1. 
Contributing to the project To help the Migration Toolkit for Applications cover most application constructs and server configurations, including yours, you can help with any of the following items: Send an email to [email protected] and let us know what MTA migration rules must cover. Provide example applications to test migration rules. Identify application components and problem areas that might be difficult to migrate: Write a short description of the problem migration areas. Write a brief overview describing how to solve the problem in migration areas. Try Migration Toolkit for Applications on your application. Make sure to report any issues you meet. Contribute to the Migration Toolkit for Applications rules repository: Write a Migration Toolkit for Applications rule to identify or automate a migration process. Create a test for the new rule. For more information, see Rule Development Guide . Contribute to the project source code: Create a core rule. Improve MTA performance or efficiency. Any level of involvement is greatly appreciated! A.4.2. Migration Toolkit for Applications development resources Use the following resources to learn and contribute to the Migration Toolkit for Applications development: MTA forums: https://developer.jboss.org/en/windup Jira issue tracker: https://issues.redhat.com/projects/MTA/issues MTA mailing list: [email protected] A.4.3. Reporting issues MTA uses Jira as its issue tracking system. If you encounter an issue executing MTA, submit a Jira issue . Revised on 2024-12-31 15:02:59 UTC
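Bringing the arguments from section A.1 together, a non-interactive analysis run of the kind described in the note at the start of that section (passing --overwrite, --input, and --target) might look like the following. This is a sketch only; the mta-cli binary name and the example source and target values are assumptions to adapt to your installation:

mta-cli analyze --overwrite --input <INPUT_ARCHIVE_OR_DIRECTORY> --output <OUTPUT_REPORT_DIRECTORY> --source eap:6 --target eap:7 --mode full

Because --overwrite is set, any existing contents of the output directory are deleted without the [y,N] prompt shown in the output-directory example above.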
|
[
"--input <INPUT_ARCHIVE_OR_DIRECTORY> [...]",
"--output <OUTPUT_REPORT_DIRECTORY>",
"Overwrite all contents of \"/home/username/<OUTPUT_REPORT_DIRECTORY>\" (anything already in the directory will be deleted)? [y,N]",
"--source <SOURCE_1> <SOURCE_2>",
"--target <TARGET_1> <TARGET_2>"
] |
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/reference_material
|
7.77. gtk2
|
7.77. gtk2 7.77.1. RHBA-2013:0493 - gtk2 bug fix update Updated gtk2 packages that fix two bugs are now available for Red Hat Enterprise Linux 6. GIMP Toolkit (GTK+) is a multi-platform toolkit for creating graphical user interfaces. Bug Fixes BZ#882346 Due to a recent change in the behavior of one of the X.Org Server components, GTK+ applications could not use certain key combinations for key bindings. This update makes GTK+ compatible with the new behavior, which ensures that no regressions occur in applications that use the library. BZ#889172 Previously, when switching between the "Recently Used" and "Search" tabs in the "Open Files" dialog box, the "Size" column in the view disappeared. This update ensures the column is visible when the relevant option is selected. Users of GTK+ are advised to upgrade to these updated packages, which fix these bugs.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/gtk2
|
Chapter 11. Installing a cluster on Azure using ARM templates
|
Chapter 11. Installing a cluster on Azure using ARM templates In OpenShift Container Platform version 4.14, you can install a cluster on Microsoft Azure by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The following documentation was last tested using version 2.49.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Alternatives to storing administrator-level secrets in the kube-system project . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 11.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 11.3. Configuring your Azure project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 11.3.1. 
Azure account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure components, and the default Azure subscription and service limits, quotas, and constraints affect your ability to install OpenShift Container Platform clusters. Important Default limits vary by offer category types, such as Free Trial and Pay-As-You-Go, and by series, such as Dv2, F, and G. For example, the default for Enterprise Agreement subscriptions is 350 cores. Check the limits for your subscription type and if necessary, increase quota limits for your account before you install a default cluster on Azure. The following table summarizes the Azure components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Default Azure limit Description vCPU 40 20 per region A default cluster requires 40 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap machine uses Standard_D4s_v3 machines, which use 4 vCPUs, the control plane machines use Standard_D8s_v3 virtual machines, which use 8 vCPUs, and the worker machines use Standard_D4s_v3 virtual machines, which use 4 vCPUs, a default cluster requires 40 vCPUs. The bootstrap node VM, which uses 4 vCPUs, is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. OS Disk 7 Each cluster machine must have a minimum of 100 GB of storage and 300 IOPS. While these are the minimum supported values, faster storage is recommended for production clusters and clusters with intensive workloads. For more information about optimizing storage for performance, see the page titled "Optimizing storage" in the "Scalability and performance" section. VNet 1 1000 per region Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 65,536 per region Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 5000 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 1000 per region Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 3 Each of the two public load balancers uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. 
The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Spot VM vCPUs (optional) 0 If you configure spot VMs, your cluster must have two spot VM vCPUs for every compute node. 20 per region This is an optional component. To use spot VMs, you must increase the Azure default limit to at least twice the number of compute nodes in your cluster. Note Using spot VMs for control plane nodes is not recommended. Additional resources Optimizing storage 11.3.2. Configuring a public DNS zone in Azure To install OpenShift Container Platform, the Microsoft Azure account you use must have a dedicated public hosted DNS zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Azure or another source. Note For more information about purchasing domains through Azure, see Buy a custom domain name for Azure App Service in the Azure documentation. If you are using an existing domain and registrar, migrate its DNS to Azure. See Migrate an active DNS name to Azure App Service in the Azure documentation. Configure DNS for your domain. Follow the steps in the Tutorial: Host your domain in Azure DNS in the Azure documentation to create a public hosted zone for your domain or subdomain, extract the new authoritative name servers, and update the registrar records for the name servers that your domain uses. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. You can view Azure's DNS solution by visiting this example for creating DNS zones . 11.3.3. Increasing Azure account limits To increase an account limit, file a support request on the Azure portal. Note You can increase only one type of quota per support request. Procedure From the Azure portal, click Help + support in the lower left corner. Click New support request and then select the required values: From the Issue type list, select Service and subscription limits (quotas) . From the Subscription list, select the subscription to modify. From the Quota type list, select the quota to increase. For example, select Compute-VM (cores-vCPUs) subscription limit increases to increase the number of vCPUs, which is required to install a cluster. Click : Solutions . On the Problem Details page, provide the required information for your quota increase: Click Provide details and provide the required details in the Quota details window. In the SUPPORT METHOD and CONTACT INFO sections, provide the issue severity and your contact details. Click : Review + create and then click Create . 11.3.4. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 11.3.5. Recording the subscription and tenant IDs The installation program requires the subscription and tenant IDs that are associated with your Azure account. You can use the Azure CLI to gather this information. Prerequisites You have installed or updated the Azure CLI . Procedure Log in to the Azure CLI by running the following command: USD az login Ensure that you are using the right subscription: View a list of available subscriptions by running the following command: USD az account list --refresh Example output [ { "cloudName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } }, { "cloudName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": false, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } ] View the details of the active account, and confirm that this is the subscription you want to use, by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 1", "state": "Enabled", "tenantId": "6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } If you are not using the right subscription: Change the active subscription by running the following command: USD az account set -s <subscription_id> Verify that you are using the subscription you need by running the following command: USD az account show Example output { "environmentName": "AzureCloud", "id": "9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "isDefault": true, "name": "Subscription Name 2", "state": "Enabled", "tenantId": "7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "user": { "name": "[email protected]", "type": "user" } } Record the id and tenantId parameter values from the output. You require these values to install an OpenShift Container Platform cluster. 11.3.6. Supported identities to access Azure resources An OpenShift Container Platform cluster requires an Azure identity to create and manage Azure resources. As such, you need one of the following types of identities to complete the installation: A service principal A system-assigned managed identity A user-assigned managed identity 11.3.7. Required Azure permissions for user-provisioned infrastructure The installation program requires access to an Azure service principal or managed identity with the necessary permissions to deploy the cluster and to maintain its daily operation. These permissions must be granted to the Azure subscription that is associated with the identity. The following options are available to you: You can assign the identity the Contributor and User Access Administrator roles. Assigning these roles is the quickest way to grant all of the required permissions. For more information about assigning roles, see the Azure documentation for managing access to Azure resources using the Azure portal . 
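If you prefer the Azure CLI to the portal for this role assignment, a sketch of granting both roles at subscription scope might look like the following; <object_id> and <subscription_id> are placeholders for your identity's object ID and your subscription ID.

# Grant the broad roles to the identity that the installation program will use.
az role assignment create --assignee-object-id <object_id> \
  --role "Contributor" --scope "/subscriptions/<subscription_id>"
az role assignment create --assignee-object-id <object_id> \
  --role "User Access Administrator" --scope "/subscriptions/<subscription_id>"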
If your organization's security policies require a more restrictive set of permissions, you can create a custom role with the necessary permissions. The following permissions are required for creating an OpenShift Container Platform cluster on Microsoft Azure. Example 11.1. Required permissions for creating authorization resources Microsoft.Authorization/policies/audit/action Microsoft.Authorization/policies/auditIfNotExists/action Microsoft.Authorization/roleAssignments/read Microsoft.Authorization/roleAssignments/write Example 11.2. Required permissions for creating compute resources Microsoft.Compute/images/read Microsoft.Compute/images/write Microsoft.Compute/images/delete Microsoft.Compute/availabilitySets/read Microsoft.Compute/disks/beginGetAccess/action Microsoft.Compute/disks/delete Microsoft.Compute/disks/read Microsoft.Compute/disks/write Microsoft.Compute/galleries/images/read Microsoft.Compute/galleries/images/versions/read Microsoft.Compute/galleries/images/versions/write Microsoft.Compute/galleries/images/write Microsoft.Compute/galleries/read Microsoft.Compute/galleries/write Microsoft.Compute/snapshots/read Microsoft.Compute/snapshots/write Microsoft.Compute/snapshots/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/virtualMachines/powerOff/action Microsoft.Compute/virtualMachines/read Microsoft.Compute/virtualMachines/write Microsoft.Compute/virtualMachines/deallocate/action Example 11.3. Required permissions for creating identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/assign/action Microsoft.ManagedIdentity/userAssignedIdentities/read Microsoft.ManagedIdentity/userAssignedIdentities/write Example 11.4. Required permissions for creating network resources Microsoft.Network/dnsZones/A/write Microsoft.Network/dnsZones/CNAME/write Microsoft.Network/dnszones/CNAME/read Microsoft.Network/dnszones/read Microsoft.Network/loadBalancers/backendAddressPools/join/action Microsoft.Network/loadBalancers/backendAddressPools/read Microsoft.Network/loadBalancers/backendAddressPools/write Microsoft.Network/loadBalancers/read Microsoft.Network/loadBalancers/write Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkInterfaces/join/action Microsoft.Network/networkInterfaces/read Microsoft.Network/networkInterfaces/write Microsoft.Network/networkSecurityGroups/join/action Microsoft.Network/networkSecurityGroups/read Microsoft.Network/networkSecurityGroups/securityRules/delete Microsoft.Network/networkSecurityGroups/securityRules/read Microsoft.Network/networkSecurityGroups/securityRules/write Microsoft.Network/networkSecurityGroups/write Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/A/write Microsoft.Network/privateDnsZones/A/delete Microsoft.Network/privateDnsZones/SOA/read Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/read Microsoft.Network/privateDnsZones/virtualNetworkLinks/write Microsoft.Network/privateDnsZones/write Microsoft.Network/publicIPAddresses/delete Microsoft.Network/publicIPAddresses/join/action Microsoft.Network/publicIPAddresses/read Microsoft.Network/publicIPAddresses/write Microsoft.Network/virtualNetworks/join/action Microsoft.Network/virtualNetworks/read Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read Microsoft.Network/virtualNetworks/subnets/write Microsoft.Network/virtualNetworks/write Example 11.5. 
Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/InProgress/action Microsoft.Resourcehealth/healthevent/Pending/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 11.6. Required permissions for creating a resource group Microsoft.Resources/subscriptions/resourceGroups/read Microsoft.Resources/subscriptions/resourcegroups/write Example 11.7. Required permissions for creating resource tags Microsoft.Resources/tags/write Example 11.8. Required permissions for creating storage resources Microsoft.Storage/storageAccounts/blobServices/read Microsoft.Storage/storageAccounts/blobServices/containers/write Microsoft.Storage/storageAccounts/fileServices/read Microsoft.Storage/storageAccounts/fileServices/shares/read Microsoft.Storage/storageAccounts/fileServices/shares/write Microsoft.Storage/storageAccounts/fileServices/shares/delete Microsoft.Storage/storageAccounts/listKeys/action Microsoft.Storage/storageAccounts/read Microsoft.Storage/storageAccounts/write Example 11.9. Required permissions for creating deployments Microsoft.Resources/deployments/read Microsoft.Resources/deployments/write Microsoft.Resources/deployments/validate/action Microsoft.Resources/deployments/operationstatuses/read Example 11.10. Optional permissions for creating compute resources Microsoft.Compute/availabilitySets/delete Microsoft.Compute/availabilitySets/write Example 11.11. Optional permissions for creating marketplace virtual machine resources Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/read Microsoft.MarketplaceOrdering/offertypes/publishers/offers/plans/agreements/write Example 11.12. Optional permissions for enabling user-managed encryption Microsoft.Compute/diskEncryptionSets/read Microsoft.Compute/diskEncryptionSets/write Microsoft.Compute/diskEncryptionSets/delete Microsoft.KeyVault/vaults/read Microsoft.KeyVault/vaults/write Microsoft.KeyVault/vaults/delete Microsoft.KeyVault/vaults/deploy/action Microsoft.KeyVault/vaults/keys/read Microsoft.KeyVault/vaults/keys/write Microsoft.Features/providers/features/register/action The following permissions are required for deleting an OpenShift Container Platform cluster on Microsoft Azure. Example 11.13. Required permissions for deleting authorization resources Microsoft.Authorization/roleAssignments/delete Example 11.14. Required permissions for deleting compute resources Microsoft.Compute/disks/delete Microsoft.Compute/galleries/delete Microsoft.Compute/galleries/images/delete Microsoft.Compute/galleries/images/versions/delete Microsoft.Compute/virtualMachines/delete Microsoft.Compute/images/delete Example 11.15. Required permissions for deleting identity management resources Microsoft.ManagedIdentity/userAssignedIdentities/delete Example 11.16. 
Required permissions for deleting network resources Microsoft.Network/dnszones/read Microsoft.Network/dnsZones/A/read Microsoft.Network/dnsZones/A/delete Microsoft.Network/dnsZones/CNAME/read Microsoft.Network/dnsZones/CNAME/delete Microsoft.Network/loadBalancers/delete Microsoft.Network/networkInterfaces/delete Microsoft.Network/networkSecurityGroups/delete Microsoft.Network/privateDnsZones/read Microsoft.Network/privateDnsZones/A/read Microsoft.Network/privateDnsZones/delete Microsoft.Network/privateDnsZones/virtualNetworkLinks/delete Microsoft.Network/publicIPAddresses/delete Microsoft.Network/virtualNetworks/delete Example 11.17. Required permissions for checking the health of resources Microsoft.Resourcehealth/healthevent/Activated/action Microsoft.Resourcehealth/healthevent/Resolved/action Microsoft.Resourcehealth/healthevent/Updated/action Example 11.18. Required permissions for deleting a resource group Microsoft.Resources/subscriptions/resourcegroups/delete Example 11.19. Required permissions for deleting storage resources Microsoft.Storage/storageAccounts/delete Microsoft.Storage/storageAccounts/listKeys/action Note To install OpenShift Container Platform on Azure, you must scope the permissions related to resource group creation to your subscription. After the resource group is created, you can scope the rest of the permissions to the created resource group. If the public DNS zone is present in a different resource group, then the network DNS zone related permissions must always be applied to your subscription. You can scope all the permissions to your subscription when deleting an OpenShift Container Platform cluster. 11.3.8. Using Azure managed identities The installation program requires an Azure identity to complete the installation. You can use either a system-assigned or user-assigned managed identity. If you are unable to use a managed identity, you can use a service principal. Procedure If you are using a system-assigned managed identity, enable it on the virtual machine that you will run the installation program from. If you are using a user-assigned managed identity: Assign it to the virtual machine that you will run the installation program from. Record its client ID. You require this value when installing the cluster. For more information about viewing the details of a user-assigned managed identity, see the Microsoft Azure documentation for listing user-assigned managed identities . Verify that the required permissions are assigned to the managed identity. 11.3.9. Creating a service principal The installation program requires an Azure identity to complete the installation. You can use a service principal. If you are unable to use a service principal, you can use a managed identity. Prerequisites You have installed or updated the Azure CLI . You have an Azure subscription ID. If you are not going to assign the Contributor and User Access Administrator roles to the service principal, you have created a custom role with the required Azure permissions. Procedure Create the service principal for your account by running the following command: USD az ad sp create-for-rbac --role <role_name> \ 1 --name <service_principal> \ 2 --scopes /subscriptions/<subscription_id> 3 1 Defines the role name. You can use the Contributor role, or you can specify a custom role which contains the necessary permissions. 2 Defines the service principal name. 3 Specifies the subscription ID.
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" } Record the values of the appId and password parameters from the output. You require these values when installing the cluster. If you applied the Contributor role to your service principal, assign the User Access Administrator role by running the following command: USD az role assignment create --role "User Access Administrator" \ --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) \ 1 --scope /subscriptions/<subscription_id> 2 1 Specify the appId parameter value for your service principal. 2 Specifies the subscription ID. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 11.3.10. Supported Azure regions The installation program dynamically generates the list of available Microsoft Azure regions based on your subscription. Supported Azure public regions australiacentral (Australia Central) australiaeast (Australia East) australiasoutheast (Australia South East) brazilsouth (Brazil South) canadacentral (Canada Central) canadaeast (Canada East) centralindia (Central India) centralus (Central US) eastasia (East Asia) eastus (East US) eastus2 (East US 2) francecentral (France Central) germanywestcentral (Germany West Central) israelcentral (Israel Central) italynorth (Italy North) japaneast (Japan East) japanwest (Japan West) koreacentral (Korea Central) koreasouth (Korea South) mexicocentral (Mexico Central) newzealandnorth (New Zealand North) northcentralus (North Central US) northeurope (North Europe) norwayeast (Norway East) polandcentral (Poland Central) qatarcentral (Qatar Central) southafricanorth (South Africa North) southcentralus (South Central US) southeastasia (Southeast Asia) southindia (South India) spaincentral (Spain Central) swedencentral (Sweden Central) switzerlandnorth (Switzerland North) uaenorth (UAE North) uksouth (UK South) ukwest (UK West) westcentralus (West Central US) westeurope (West Europe) westindia (West India) westus (West US) westus2 (West US 2) westus3 (West US 3) Supported Azure Government regions Support for the following Microsoft Azure Government (MAG) regions was added in OpenShift Container Platform version 4.6: usgovtexas (US Gov Texas) usgovvirginia (US Gov Virginia) You can reference all available MAG regions in the Azure documentation . Other provided MAG regions are expected to work with OpenShift Container Platform, but have not been tested. 11.4. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 11.4.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 11.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines.
You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 11.4.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 11.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 11.4.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 11.20. 
Machine types based on 64-bit x86 architecture standardBSFamily standardDADSv5Family standardDASv4Family standardDASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHCSFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 11.4.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 11.21. Machine types based on 64-bit ARM architecture standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family 11.5. Using the Azure Marketplace offering Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat. To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client. 
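Before working through the procedure that follows, note that the image lookup can also be scripted. The sketch below captures the newest rh-ocp-worker URN for the North American publisher; swap redhat for redhat-limited in EMEA, and treat the JMESPath query and variable names as illustrative only.

# Capture the most recent Hyper-V generation 2 worker image URN.
WORKER_IMAGE_URN="$(az vm image list --all --offer rh-ocp-worker --publisher redhat \
  --query "sort_by([?sku=='rh-ocp-worker'], &version) | [-1].urn" --output tsv)"

# The version is the last colon-separated field of the URN.
WORKER_IMAGE_VERSION="${WORKER_IMAGE_URN##*:}"
echo "${WORKER_IMAGE_URN} (version ${WORKER_IMAGE_VERSION})"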
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.13. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. If you use the Azure Resource Manager (ARM) template to deploy your worker nodes: Update storageProfile.imageReference by deleting the id parameter and adding the offer , publisher , sku , and version parameters by using the values from your offer. Specify a plan for the virtual machines (VMs). Example 06_workers.json ARM template with an updated storageProfile.imageReference object and a specified plan ... "plan" : { "name": "rh-ocp-worker", "product": "rh-ocp-worker", "publisher": "redhat" }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { ... "storageProfile": { "imageReference": { "offer": "rh-ocp-worker", "publisher": "redhat", "sku": "rh-ocp-worker", "version": "413.92.2023101700" } ... } ... } 11.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. 
Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 11.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
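For example, a sketch of generating an ECDSA key for a FIPS-constrained environment follows; the file path is an example.

# Generate an ECDSA key instead of ed25519; RSA (-t rsa -b 4096) is another option.
ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa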
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 11.8. Creating the installation files for Azure To install OpenShift Container Platform on Microsoft Azure using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 11.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. 
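The procedure that follows converts a Butane config with the butane binary. If butane is not already installed, one way to fetch it on a Linux x86_64 host is sketched below; the mirror URL layout is an assumption based on the public OpenShift client mirror, so verify it for your architecture.

# Download the latest butane binary, make it executable, and place it on the PATH.
curl -L "https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane" --output butane
chmod +x butane
sudo mv butane /usr/local/bin/
butane --version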
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 11.8.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. 
If you are installing the cluster using a service principal, you have its application ID and password. If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to.
The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Azure". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. If previously not detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. 11.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 11.8.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into, for example centralus . This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the public DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. 
For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 11.8.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. 
Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exist as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory. 11.9. Creating the Azure resource group You must create a Microsoft Azure resource group and an identity for that resource group. These are both used during the installation of your OpenShift Container Platform cluster on Azure. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} Create an Azure identity for the resource group: USD az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity This is used to grant the required access to Operators in your cluster. For example, this allows the Ingress Operator to create a public IP and its load balancer.
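Optionally, before assigning roles, you can confirm that the resource group and the identity were created; this is only a sanity check and reuses the variables exported earlier in this section.

# Confirm that the resource group provisioned successfully.
az group show --name "${RESOURCE_GROUP}" --query properties.provisioningState --output tsv

# Confirm that the identity exists and note its client ID.
az identity show --resource-group "${RESOURCE_GROUP}" --name "${INFRA_ID}-identity" \
  --query clientId --output tsv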
You must assign the Azure identity to a role. Grant the Contributor role to the Azure identity: Export the following variables required by the Azure role assignment: USD export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv` USD export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv` Assign the Contributor role to the identity: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role 'Contributor' --scope "USD{RESOURCE_GROUP_ID}" Note If you want to assign a custom role with all the required permissions to the identity, run the following command: USD az role assignment create --assignee "USD{PRINCIPAL_ID}" --role <custom_role> \ 1 --scope "USD{RESOURCE_GROUP_ID}" 1 Specifies the custom role name. 11.10. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>."rhel-coreos-extensions"."azure-disk".url'` where: <architecture> Specifies the architecture, valid values include x86_64 or aarch64 . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Copy the local VHD to a blob: USD az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob "rhcos.vhd" --destination-container vhd --source-uri "USD{VHD_URL}" Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 11.11. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. 
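If a public zone for your base domain already exists, you can confirm it and note its authoritative name servers before choosing an approach; the following is a sketch with placeholder resource group and domain names.

# Confirm the existing public DNS zone and list its name servers for delegation checks.
az network dns zone show --resource-group <dns_resource_group> --name <base_domain> \
  --query nameServers --output json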
You should choose the DNS strategy that fits your scenario. For this example, Azure's DNS solution is used, so you will create a new public DNS zone for external (internet) visibility and a private DNS zone for internal cluster resolution. Note The public DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the public DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new public DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a public DNS zone that already exists. Create the private DNS zone in the same resource group as the rest of this deployment: USD az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can learn more about configuring a public DNS zone in Azure by visiting that section. 11.12. Creating a VNet in Azure You must create a virtual network (VNet) in Microsoft Azure for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Link the VNet template to the private DNS zone: USD az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v "USD{INFRA_ID}-vnet" -e false 11.12.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 11.22. 
01_vnet.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", "addressPrefix" : "10.0.0.0/16", "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", "masterSubnetPrefix" : "10.0.0.0/24", "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", "nodeSubnetPrefix" : "10.0.1.0/24", "clusterNsgName" : "[concat(parameters('baseName'), '-nsg')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/virtualNetworks", "name" : "[variables('virtualNetworkName')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]" ], "properties" : { "addressSpace" : { "addressPrefixes" : [ "[variables('addressPrefix')]" ] }, "subnets" : [ { "name" : "[variables('masterSubnetName')]", "properties" : { "addressPrefix" : "[variables('masterSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } }, { "name" : "[variables('nodeSubnetName')]", "properties" : { "addressPrefix" : "[variables('nodeSubnetPrefix')]", "serviceEndpoints": [], "networkSecurityGroup" : { "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]" } } } ] } }, { "type" : "Microsoft.Network/networkSecurityGroups", "name" : "[variables('clusterNsgName')]", "apiVersion" : "2018-10-01", "location" : "[variables('location')]", "properties" : { "securityRules" : [ { "name" : "apiserver_in", "properties" : { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "6443", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 101, "direction" : "Inbound" } } ] } } ] } 11.13. Deploying the RHCOS cluster image for the Azure infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters storageAccount="USD{CLUSTER_NAME}sa" \ 3 --parameters architecture="<architecture>" 4 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 
2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of your Azure storage account. 4 Specify the system architecture. Valid values are x64 (default) or Arm64 . 11.13.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 11.23. 02_storage.json ARM template { "USDschema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "architecture": { "type": "string", "metadata": { "description": "The architecture of the Virtual Machines" }, "defaultValue": "x64", "allowedValues": [ "Arm64", "x64" ] }, "baseName": { "type": "string", "minLength": 1, "metadata": { "description": "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "storageAccount": { "type": "string", "metadata": { "description": "The Storage Account name" } }, "vhdBlobURL": { "type": "string", "metadata": { "description": "URL pointing to the blob where the VHD to be used to create master and worker machines is located" } } }, "variables": { "location": "[resourceGroup().location]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName": "[parameters('baseName')]", "imageNameGen2": "[concat(parameters('baseName'), '-gen2')]", "imageRelease": "1.0.0" }, "resources": [ { "apiVersion": "2021-10-01", "type": "Microsoft.Compute/galleries", "name": "[variables('galleryName')]", "location": "[variables('location')]", "resources": [ { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageName')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V1", "identifier": { "offer": "rhcos", "publisher": "RedHat", "sku": "basic" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageName')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] }, { "apiVersion": "2021-10-01", "type": "images", "name": "[variables('imageNameGen2')]", "location": "[variables('location')]", "dependsOn": [ "[variables('galleryName')]" ], "properties": { "architecture": "[parameters('architecture')]", "hyperVGeneration": "V2", "identifier": { "offer": "rhcos-gen2", "publisher": "RedHat-gen2", "sku": "gen2" }, "osState": "Generalized", "osType": "Linux" }, "resources": [ { "apiVersion": "2021-10-01", "type": "versions", "name": "[variables('imageRelease')]", "location": "[variables('location')]", "dependsOn": [ "[variables('imageNameGen2')]" ], "properties": { "publishingProfile": { "storageAccountType": "Standard_LRS", "targetRegions": [ { "name": "[variables('location')]", "regionalReplicaCount": "1" } ] }, "storageProfile": { "osDiskImage": { "source": { "id": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]", "uri": "[parameters('vhdBlobURL')]" } } } } } ] } ] } ] } 
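After the deployment completes, you can optionally confirm that both image definitions (the hyperVGeneration V1 image and the -gen2 V2 image) exist in the compute gallery. The following is a minimal sketch, assuming the gallery name follows the gallery_<infra_id_with_dashes_replaced_by_underscores> pattern used in the template above and that INFRA_ID and RESOURCE_GROUP are still exported:

# List the image definitions that 02_storage.json created in the compute gallery.
GALLERY_NAME="gallery_${INFRA_ID//-/_}"
az sig image-definition list -g "${RESOURCE_GROUP}" --gallery-name "${GALLERY_NAME}" --query "[].{name:name, hyperVGeneration:hyperVGeneration}" -o table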
11.14. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 11.14.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat.
Table 11.3. Ports used for all-machine to all-machine communications
Protocol | Port | Description
ICMP | N/A | Network reachability tests
TCP | 1936 | Metrics
TCP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099.
TCP | 10250 - 10259 | The default ports that Kubernetes reserves
TCP | 10256 | openshift-sdn
UDP | 4789 | VXLAN
UDP | 6081 | Geneve
UDP | 9000 - 9999 | Host level services, including the node exporter on ports 9100 - 9101.
UDP | 500 | IPsec IKE packets
UDP | 4500 | IPsec NAT-T packets
UDP | 123 | Network Time Protocol (NTP) on UDP port 123. If an external NTP time server is configured, you must open UDP port 123.
TCP/UDP | 30000 - 32767 | Kubernetes node port
ESP | N/A | IPsec Encapsulating Security Payload (ESP)
Table 11.4. Ports used for all-machine to control plane communications
Protocol | Port | Description
TCP | 6443 | Kubernetes API
Table 11.5. Ports used for control plane machine to control plane machine communications
Protocol | Port | Description
TCP | 2379 - 2380 | etcd server and peer ports
11.15. Creating networking and load balancing components in Azure You must configure networking and load balancing in Microsoft Azure for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters privateDNSZoneName="USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The name of the private DNS zone. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record in the public zone for the API public load balancer. The USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the public DNS zone exists.
Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Create the api DNS record in a new public zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing public zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 11.15.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 11.24. 03_infra.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "privateDNSZoneName" : { "type" : "string", "metadata" : { "description" : "Name of the private DNS zone" } } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", "skuName": "Standard" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('masterPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('masterPublicIpAddressName')]" } } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('masterLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "dependsOn" : [ "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" ], "properties" : { "frontendIPConfigurations" : [ { "name" : "public-lb-ip-v4", "properties" : { "publicIPAddress" : { "id" : "[variables('masterPublicIpAddressID')]" } } } ], "backendAddressPools" : [ { "name" : 
"[variables('masterLoadBalancerName')]" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]" }, "backendAddressPool" : { "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, "protocol" : "Tcp", "loadDistribution" : "Default", "idleTimeoutInMinutes" : 30, "frontendPort" : 6443, "backendPort" : 6443, "probe" : { "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/loadBalancers", "name" : "[variables('internalLoadBalancerName')]", "location" : "[variables('location')]", "sku": { "name": "[variables('skuName')]" }, "properties" : { "frontendIPConfigurations" : [ { "name" : "internal-lb-ip", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "privateIPAddressVersion" : "IPv4" } } ], "backendAddressPools" : [ { "name" : "internal-lb-backend" } ], "loadBalancingRules" : [ { "name" : "api-internal", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 6443, "backendPort" : 6443, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" } } }, { "name" : "sint", "properties" : { "frontendIPConfiguration" : { "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" }, "frontendPort" : 22623, "backendPort" : 22623, "enableFloatingIP" : false, "idleTimeoutInMinutes" : 30, "protocol" : "Tcp", "enableTcpReset" : false, "loadDistribution" : "Default", "backendAddressPool" : { "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" }, "probe" : { "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" } } } ], "probes" : [ { "name" : "api-internal-probe", "properties" : { "protocol" : "Https", "port" : 6443, "requestPath": "/readyz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } }, { "name" : "sint-probe", "properties" : { "protocol" : "Https", "port" : 22623, "requestPath": "/healthz", "intervalInSeconds" : 10, "numberOfProbes" : 3 } } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } }, { "apiVersion": "2018-09-01", "type": "Microsoft.Network/privateDnsZones/A", "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", "location" : "[variables('location')]", "dependsOn" : [ "[concat('Microsoft.Network/loadBalancers/', 
variables('internalLoadBalancerName'))]" ], "properties": { "ttl": 60, "aRecords": [ { "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" } ] } } ] } 11.16. Creating the bootstrap machine in Azure You must create the bootstrap machine in Microsoft Azure to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.16.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 11.25. 
04_bootstrap.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "bootstrapIgnition" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Bootstrap ignition content for the bootstrap cluster" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "bootstrapVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the Bootstrap Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", "nicName" : "[concat(variables('vmName'), '-nic')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "clusterNsgName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]", "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" }, "resources" : [ { "apiVersion" : "2018-12-01", "type" : "Microsoft.Network/publicIPAddresses", "name" : "[variables('sshPublicIpAddressName')]", "location" : "[variables('location')]", "sku": { "name": "Standard" }, "properties" : { "publicIPAllocationMethod" : "Static", "dnsSettings" : { "domainNameLabel" : "[variables('sshPublicIpAddressName')]" } } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[variables('nicName')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" ], "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "publicIPAddress": { "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" }, "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), 
'/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmName')]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('bootstrapVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmName')]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('bootstrapIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmName'),'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : 100 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" } ] } } }, { "apiVersion" : "2018-06-01", "type": "Microsoft.Network/networkSecurityGroups/securityRules", "name" : "[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]", "location" : "[variables('location')]", "dependsOn" : [ "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" ], "properties": { "protocol" : "Tcp", "sourcePortRange" : "*", "destinationPortRange" : "22", "sourceAddressPrefix" : "*", "destinationAddressPrefix" : "*", "access" : "Allow", "priority" : 100, "direction" : "Inbound" } } ] } 11.17. Creating the control plane machines in Azure You must create the control plane machines in Microsoft Azure for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's Azure Resource Manager (ARM) template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. 
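If you want to catch template or parameter problems before any resources are created, the az CLI can validate the deployment first. The following is a minimal, optional sketch rather than part of the documented procedure; it assumes the RESOURCE_GROUP and INFRA_ID variables are still exported and that MASTER_IGNITION has been exported as shown in the next step:

# Validate the control plane template and parameters without creating resources.
az deployment group validate -g "${RESOURCE_GROUP}" \
  --template-file "<installation_directory>/05_masters.json" \
  --parameters masterIgnition="${MASTER_IGNITION}" \
  --parameters baseName="${INFRA_ID}"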
Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the control plane nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.17.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 11.26. 05_masters.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "masterIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the master nodes" } }, "numberOfMasters" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift masters to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "privateDNSZoneName" : { "type" : "string", "defaultValue" : "", "metadata" : { "description" : "unused" } }, "masterVMSize" : { "type" : "string", "defaultValue" : "Standard_D8s_v3", "metadata" : { "description" : "The size of the Master Virtual Machines" } }, "diskSizeGB" : { "type" : "int", "defaultValue" : 1024, "metadata" : { "description" : "Size of the Master VM OS disk, in GB" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "masterSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]", "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", "masterLoadBalancerName" : "[parameters('baseName')]", "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", "sshKeyPath" : "/home/core/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfMasters')]", "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" } ] }, "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "copy" : { "name" : "nicCopy", "count" : "[length(variables('vmNames'))]" 
}, "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('masterSubnetRef')]" }, "loadBalancerBackendAddressPools" : [ { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]" }, { "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" } ] } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "copy" : { "name" : "vmCopy", "count" : "[length(variables('vmNames'))]" }, "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "identity" : { "type" : "userAssigned", "userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('masterVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "core", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('masterIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "caching": "ReadOnly", "writeAcceleratorEnabled": false, "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB" : "[parameters('diskSizeGB')]" } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": false } } ] } } } ] } 11.18. Wait for bootstrap completion and remove bootstrap resources in Azure After you create all of the required infrastructure in Microsoft Azure, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
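If the command instead times out or reports a FATAL error, collecting a bootstrap log bundle is usually the fastest way to diagnose the failure. The following is a minimal sketch, assuming the bootstrap node and at least one control plane node are reachable over SSH; <bootstrap_address> and <master_address> are placeholders that you replace with your own values:

# Gather bootstrap and control plane logs for troubleshooting.
./openshift-install gather bootstrap --dir <installation_directory> \
  --bootstrap <bootstrap_address> \
  --master <master_address>

After the bootstrap process completes successfully, continue by removing the bootstrap resources as described below.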
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 11.19. Creating additional worker machines in Azure You can create worker machines in Microsoft Azure for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. Note By default, Microsoft Azure places control plane machines and compute machines in a pre-set availability zone. You can manually set an availability zone for a compute node or control plane node. To do this, modify a vendor's ARM template by specifying each of your availability zones in the zones parameter of the virtual machine resource. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure. Create and configure networking and load balancers in Azure. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 11.19.1. 
ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 11.27. 06_workers.json ARM template { "USDschema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "parameters" : { "baseName" : { "type" : "string", "minLength" : 1, "metadata" : { "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" } }, "vnetBaseName": { "type": "string", "defaultValue": "", "metadata" : { "description" : "The specific customer vnet's base name (optional)" } }, "workerIgnition" : { "type" : "string", "metadata" : { "description" : "Ignition content for the worker nodes" } }, "numberOfNodes" : { "type" : "int", "defaultValue" : 3, "minValue" : 2, "maxValue" : 30, "metadata" : { "description" : "Number of OpenShift compute nodes to deploy" } }, "sshKeyData" : { "type" : "securestring", "defaultValue" : "Unused", "metadata" : { "description" : "Unused" } }, "nodeVMSize" : { "type" : "string", "defaultValue" : "Standard_D4s_v3", "metadata" : { "description" : "The size of the each Node Virtual Machine" } }, "hyperVGen": { "type": "string", "metadata": { "description": "VM generation image to use" }, "defaultValue": "V2", "allowedValues": [ "V1", "V2" ] } }, "variables" : { "location" : "[resourceGroup().location]", "virtualNetworkName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]", "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", "nodeSubnetName" : "[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]", "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", "infraLoadBalancerName" : "[parameters('baseName')]", "sshKeyPath" : "/home/capi/.ssh/authorized_keys", "identityName" : "[concat(parameters('baseName'), '-identity')]", "galleryName": "[concat('gallery_', replace(parameters('baseName'), '-', '_'))]", "imageName" : "[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]", "copy" : [ { "name" : "vmNames", "count" : "[parameters('numberOfNodes')]", "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" } ] }, "resources" : [ { "apiVersion" : "2019-05-01", "name" : "[concat('node', copyIndex())]", "type" : "Microsoft.Resources/deployments", "copy" : { "name" : "nodeCopy", "count" : "[length(variables('vmNames'))]" }, "properties" : { "mode" : "Incremental", "template" : { "USDschema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion" : "1.0.0.0", "resources" : [ { "apiVersion" : "2018-06-01", "type" : "Microsoft.Network/networkInterfaces", "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", "location" : "[variables('location')]", "properties" : { "ipConfigurations" : [ { "name" : "pipConfig", "properties" : { "privateIPAllocationMethod" : "Dynamic", "subnet" : { "id" : "[variables('nodeSubnetRef')]" } } } ] } }, { "apiVersion" : "2018-06-01", "type" : "Microsoft.Compute/virtualMachines", "name" : "[variables('vmNames')[copyIndex()]]", "location" : "[variables('location')]", "tags" : { "kubernetes.io-cluster-ffranzupi": "owned" }, "identity" : { "type" : "userAssigned", 
"userAssignedIdentities" : { "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} } }, "dependsOn" : [ "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" ], "properties" : { "hardwareProfile" : { "vmSize" : "[parameters('nodeVMSize')]" }, "osProfile" : { "computerName" : "[variables('vmNames')[copyIndex()]]", "adminUsername" : "capi", "adminPassword" : "NotActuallyApplied!", "customData" : "[parameters('workerIgnition')]", "linuxConfiguration" : { "disablePasswordAuthentication" : false } }, "storageProfile" : { "imageReference": { "id": "[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]" }, "osDisk" : { "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", "osType" : "Linux", "createOption" : "FromImage", "managedDisk": { "storageAccountType": "Premium_LRS" }, "diskSizeGB": 128 } }, "networkProfile" : { "networkInterfaces" : [ { "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", "properties": { "primary": true } } ] } } } ] } } } ] } 11.20. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. 
To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 11.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 11.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. 
Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 11.23. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the public DNS zone. 
If you are adding this cluster to a new public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing public zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 Add a *.apps record to the private DNS zone: Create a *.apps record by using the following command: USD az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300 Add the *.apps record to the private DNS zone by using the following command: USD az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 11.24. Completing an Azure installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service
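To recap the Ingress DNS procedure in section 11.23, the wildcard record steps can be combined into one short shell sequence. The following is only an illustrative sketch, not part of the official procedure; it assumes the CLUSTER_NAME, BASE_DOMAIN, BASE_DOMAIN_RESOURCE_GROUP, and RESOURCE_GROUP variables exported earlier in this installation, targets a public zone created for this cluster, and quotes the wildcard name so that the shell does not expand it:

# Capture the public IP of the default Ingress router (the EXTERNAL-IP column)
export PUBLIC_IP_ROUTER=$(oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}')

# Wildcard record in the cluster's public DNS zone
az network dns record-set a add-record -g "${BASE_DOMAIN_RESOURCE_GROUP}" -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -n '*.apps' -a "${PUBLIC_IP_ROUTER}" --ttl 300

# Wildcard record in the cluster's private DNS zone
az network private-dns record-set a create -g "${RESOURCE_GROUP}" -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -n '*.apps' --ttl 300
az network private-dns record-set a add-record -g "${RESOURCE_GROUP}" -z "${CLUSTER_NAME}.${BASE_DOMAIN}" -n '*.apps' -a "${PUBLIC_IP_ROUTER}"

Grouping the record creation this way makes it easier to rerun the steps if the router IP changes.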
|
[
"az login",
"az account list --refresh",
"[ { \"cloudName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }, { \"cloudName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": false, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 1\", \"state\": \"Enabled\", \"tenantId\": \"6xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id>",
"az account show",
"{ \"environmentName\": \"AzureCloud\", \"id\": \"9xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"isDefault\": true, \"name\": \"Subscription Name 2\", \"state\": \"Enabled\", \"tenantId\": \"7xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role <role_name> \\ 1 --name <service_principal> \\ 2 --scopes /subscriptions/<subscription_id> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"axxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" }",
"az role assignment create --role \"User Access Administrator\" --assignee-object-id USD(az ad sp show --id <appId> --query id -o tsv) 1 --scope /subscriptions/<subscription_id> 2",
"az vm image list --all --offer rh-ocp-worker --publisher redhat -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table",
"Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- ----------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:413.92.2023101700 413.92.2023101700 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:413.92.2023101700 413.92.2023101700",
"az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>",
"az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>",
"\"plan\" : { \"name\": \"rh-ocp-worker\", \"product\": \"rh-ocp-worker\", \"publisher\": \"redhat\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"storageProfile\": { \"imageReference\": { \"offer\": \"rh-ocp-worker\", \"publisher\": \"redhat\", \"sku\": \"rh-ocp-worker\", \"version\": \"413.92.2023101700\" } } }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az identity create -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity",
"export PRINCIPAL_ID=`az identity show -g USD{RESOURCE_GROUP} -n USD{INFRA_ID}-identity --query principalId --out tsv`",
"export RESOURCE_GROUP_ID=`az group show -g USD{RESOURCE_GROUP} --query id --out tsv`",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role 'Contributor' --scope \"USD{RESOURCE_GROUP_ID}\"",
"az role assignment create --assignee \"USD{PRINCIPAL_ID}\" --role <custom_role> \\ 1 --scope \"USD{RESOURCE_GROUP_ID}\"",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export VHD_URL=`openshift-install coreos print-stream-json | jq -r '.architectures.<architecture>.\"rhel-coreos-extensions\".\"azure-disk\".url'`",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob copy start --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} --destination-blob \"rhcos.vhd\" --destination-container vhd --source-uri \"USD{VHD_URL}\"",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az network private-dns zone create -g USD{RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"az network private-dns link vnet create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n USD{INFRA_ID}-network-link -v \"USD{INFRA_ID}-vnet\" -e false",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(parameters('baseName'), '-vnet')]\", \"addressPrefix\" : \"10.0.0.0/16\", \"masterSubnetName\" : \"[concat(parameters('baseName'), '-master-subnet')]\", \"masterSubnetPrefix\" : \"10.0.0.0/24\", \"nodeSubnetName\" : \"[concat(parameters('baseName'), '-worker-subnet')]\", \"nodeSubnetPrefix\" : \"10.0.1.0/24\", \"clusterNsgName\" : \"[concat(parameters('baseName'), '-nsg')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/virtualNetworks\", \"name\" : \"[variables('virtualNetworkName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/networkSecurityGroups/', variables('clusterNsgName'))]\" ], \"properties\" : { \"addressSpace\" : { \"addressPrefixes\" : [ \"[variables('addressPrefix')]\" ] }, \"subnets\" : [ { \"name\" : \"[variables('masterSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('masterSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } }, { \"name\" : \"[variables('nodeSubnetName')]\", \"properties\" : { \"addressPrefix\" : \"[variables('nodeSubnetPrefix')]\", \"serviceEndpoints\": [], \"networkSecurityGroup\" : { \"id\" : \"[resourceId('Microsoft.Network/networkSecurityGroups', variables('clusterNsgName'))]\" } } } ] } }, { \"type\" : \"Microsoft.Network/networkSecurityGroups\", \"name\" : \"[variables('clusterNsgName')]\", \"apiVersion\" : \"2018-10-01\", \"location\" : \"[variables('location')]\", \"properties\" : { \"securityRules\" : [ { \"name\" : \"apiserver_in\", \"properties\" : { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"6443\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 101, \"direction\" : \"Inbound\" } } ] } } ] }",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"{ \"USDschema\": \"https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#\", \"contentVersion\": \"1.0.0.0\", \"parameters\": { \"architecture\": { \"type\": \"string\", \"metadata\": { \"description\": \"The architecture of the Virtual Machines\" }, \"defaultValue\": \"x64\", \"allowedValues\": [ \"Arm64\", \"x64\" ] }, \"baseName\": { \"type\": \"string\", \"minLength\": 1, \"metadata\": { \"description\": \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"storageAccount\": { \"type\": \"string\", \"metadata\": { \"description\": \"The Storage Account name\" } }, \"vhdBlobURL\": { \"type\": \"string\", \"metadata\": { \"description\": \"URL pointing to the blob where the VHD to be used to create master and worker machines is located\" } } }, \"variables\": { \"location\": \"[resourceGroup().location]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\": \"[parameters('baseName')]\", \"imageNameGen2\": \"[concat(parameters('baseName'), '-gen2')]\", \"imageRelease\": \"1.0.0\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"Microsoft.Compute/galleries\", \"name\": \"[variables('galleryName')]\", \"location\": \"[variables('location')]\", \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageName')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V1\", \"identifier\": { \"offer\": \"rhcos\", \"publisher\": \"RedHat\", \"sku\": \"basic\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageName')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] }, { \"apiVersion\": \"2021-10-01\", \"type\": \"images\", \"name\": \"[variables('imageNameGen2')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('galleryName')]\" ], \"properties\": { \"architecture\": \"[parameters('architecture')]\", \"hyperVGeneration\": \"V2\", \"identifier\": { \"offer\": \"rhcos-gen2\", \"publisher\": \"RedHat-gen2\", \"sku\": \"gen2\" }, \"osState\": \"Generalized\", \"osType\": \"Linux\" }, \"resources\": [ { \"apiVersion\": \"2021-10-01\", \"type\": \"versions\", \"name\": \"[variables('imageRelease')]\", \"location\": \"[variables('location')]\", \"dependsOn\": [ \"[variables('imageNameGen2')]\" ], \"properties\": { \"publishingProfile\": { \"storageAccountType\": \"Standard_LRS\", \"targetRegions\": [ { \"name\": \"[variables('location')]\", \"regionalReplicaCount\": \"1\" } ] }, \"storageProfile\": { \"osDiskImage\": { \"source\": { \"id\": \"[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccount'))]\", \"uri\": \"[parameters('vhdBlobURL')]\" } } } } } ] } ] } ] }",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters privateDNSZoneName=\"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Name of the private DNS zone\" } } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterPublicIpAddressName\" : \"[concat(parameters('baseName'), '-master-pip')]\", \"masterPublicIpAddressID\" : \"[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"masterLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"internalLoadBalancerID\" : \"[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]\", \"skuName\": \"Standard\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('masterPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('masterPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('masterLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"dependsOn\" : [ \"[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]\" ], \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"public-lb-ip-v4\", \"properties\" : { \"publicIPAddress\" : { \"id\" : \"[variables('masterPublicIpAddressID')]\" } } } ], \"backendAddressPools\" : [ { \"name\" : \"[variables('masterLoadBalancerName')]\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" :\"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip-v4')]\" }, \"backendAddressPool\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, \"protocol\" : \"Tcp\", \"loadDistribution\" : \"Default\", \"idleTimeoutInMinutes\" : 30, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"probe\" : { \"id\" : \"[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" 
: \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/loadBalancers\", \"name\" : \"[variables('internalLoadBalancerName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"[variables('skuName')]\" }, \"properties\" : { \"frontendIPConfigurations\" : [ { \"name\" : \"internal-lb-ip\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"privateIPAddressVersion\" : \"IPv4\" } } ], \"backendAddressPools\" : [ { \"name\" : \"internal-lb-backend\" } ], \"loadBalancingRules\" : [ { \"name\" : \"api-internal\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 6443, \"backendPort\" : 6443, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]\" } } }, { \"name\" : \"sint\", \"properties\" : { \"frontendIPConfiguration\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]\" }, \"frontendPort\" : 22623, \"backendPort\" : 22623, \"enableFloatingIP\" : false, \"idleTimeoutInMinutes\" : 30, \"protocol\" : \"Tcp\", \"enableTcpReset\" : false, \"loadDistribution\" : \"Default\", \"backendAddressPool\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]\" }, \"probe\" : { \"id\" : \"[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]\" } } } ], \"probes\" : [ { \"name\" : \"api-internal-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 6443, \"requestPath\": \"/readyz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } }, { \"name\" : \"sint-probe\", \"properties\" : { \"protocol\" : \"Https\", \"port\" : 22623, \"requestPath\": \"/healthz\", \"intervalInSeconds\" : 10, \"numberOfProbes\" : 3 } } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } }, { \"apiVersion\": \"2018-09-01\", \"type\": \"Microsoft.Network/privateDnsZones/A\", \"name\": \"[concat(parameters('privateDNSZoneName'), '/api-int')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]\" ], \"properties\": { \"ttl\": 60, \"aRecords\": [ { \"ipv4Address\": \"[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]\" } ] } } ] }",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"bootstrapIgnition\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Bootstrap ignition content for the bootstrap cluster\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"bootstrapVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the Bootstrap Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"vmName\" : \"[concat(parameters('baseName'), '-bootstrap')]\", \"nicName\" : \"[concat(variables('vmName'), '-nic')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"clusterNsgName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-nsg')]\", \"sshPublicIpAddressName\" : \"[concat(variables('vmName'), '-ssh-pip')]\" }, \"resources\" : [ { \"apiVersion\" : \"2018-12-01\", \"type\" : \"Microsoft.Network/publicIPAddresses\", \"name\" : \"[variables('sshPublicIpAddressName')]\", \"location\" : \"[variables('location')]\", \"sku\": { \"name\": \"Standard\" }, \"properties\" : { \"publicIPAllocationMethod\" : \"Static\", \"dnsSettings\" : { \"domainNameLabel\" : \"[variables('sshPublicIpAddressName')]\" } } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[variables('nicName')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" ], \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"publicIPAddress\": { \"id\": \"[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]\" }, \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, 
\"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmName')]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('bootstrapVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmName')]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('bootstrapIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmName'),'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : 100 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]\" } ] } } }, { \"apiVersion\" : \"2018-06-01\", \"type\": \"Microsoft.Network/networkSecurityGroups/securityRules\", \"name\" : \"[concat(variables('clusterNsgName'), '/bootstrap_ssh_in')]\", \"location\" : \"[variables('location')]\", \"dependsOn\" : [ \"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]\" ], \"properties\": { \"protocol\" : \"Tcp\", \"sourcePortRange\" : \"*\", \"destinationPortRange\" : \"22\", \"sourceAddressPrefix\" : \"*\", \"destinationAddressPrefix\" : \"*\", \"access\" : \"Allow\", \"priority\" : 100, \"direction\" : \"Inbound\" } } ] }",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"masterIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the master nodes\" } }, \"numberOfMasters\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift masters to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"privateDNSZoneName\" : { \"type\" : \"string\", \"defaultValue\" : \"\", \"metadata\" : { \"description\" : \"unused\" } }, \"masterVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D8s_v3\", \"metadata\" : { \"description\" : \"The size of the Master Virtual Machines\" } }, \"diskSizeGB\" : { \"type\" : \"int\", \"defaultValue\" : 1024, \"metadata\" : { \"description\" : \"Size of the Master VM OS disk, in GB\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"masterSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-master-subnet')]\", \"masterSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]\", \"masterLoadBalancerName\" : \"[parameters('baseName')]\", \"internalLoadBalancerName\" : \"[concat(parameters('baseName'), '-internal-lb')]\", \"sshKeyPath\" : \"/home/core/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfMasters')]\", \"input\" : \"[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"copy\" : { \"name\" : \"nicCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('masterSubnetRef')]\" }, \"loadBalancerBackendAddressPools\" : [ { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/', 
variables('masterLoadBalancerName'))]\" }, { \"id\" : \"[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]\" } ] } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"copy\" : { \"name\" : \"vmCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('masterVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"core\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('masterIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()], '_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"caching\": \"ReadOnly\", \"writeAcceleratorEnabled\": false, \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\" : \"[parameters('diskSizeGB')]\" } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": false } } ] } } } ] }",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"{ \"USDschema\" : \"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"parameters\" : { \"baseName\" : { \"type\" : \"string\", \"minLength\" : 1, \"metadata\" : { \"description\" : \"Base name to be used in resource names (usually the cluster's Infra ID)\" } }, \"vnetBaseName\": { \"type\": \"string\", \"defaultValue\": \"\", \"metadata\" : { \"description\" : \"The specific customer vnet's base name (optional)\" } }, \"workerIgnition\" : { \"type\" : \"string\", \"metadata\" : { \"description\" : \"Ignition content for the worker nodes\" } }, \"numberOfNodes\" : { \"type\" : \"int\", \"defaultValue\" : 3, \"minValue\" : 2, \"maxValue\" : 30, \"metadata\" : { \"description\" : \"Number of OpenShift compute nodes to deploy\" } }, \"sshKeyData\" : { \"type\" : \"securestring\", \"defaultValue\" : \"Unused\", \"metadata\" : { \"description\" : \"Unused\" } }, \"nodeVMSize\" : { \"type\" : \"string\", \"defaultValue\" : \"Standard_D4s_v3\", \"metadata\" : { \"description\" : \"The size of the each Node Virtual Machine\" } }, \"hyperVGen\": { \"type\": \"string\", \"metadata\": { \"description\": \"VM generation image to use\" }, \"defaultValue\": \"V2\", \"allowedValues\": [ \"V1\", \"V2\" ] } }, \"variables\" : { \"location\" : \"[resourceGroup().location]\", \"virtualNetworkName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-vnet')]\", \"virtualNetworkID\" : \"[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]\", \"nodeSubnetName\" : \"[concat(if(not(empty(parameters('vnetBaseName'))), parameters('vnetBaseName'), parameters('baseName')), '-worker-subnet')]\", \"nodeSubnetRef\" : \"[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]\", \"infraLoadBalancerName\" : \"[parameters('baseName')]\", \"sshKeyPath\" : \"/home/capi/.ssh/authorized_keys\", \"identityName\" : \"[concat(parameters('baseName'), '-identity')]\", \"galleryName\": \"[concat('gallery_', replace(parameters('baseName'), '-', '_'))]\", \"imageName\" : \"[concat(parameters('baseName'), if(equals(parameters('hyperVGen'), 'V2'), '-gen2', ''))]\", \"copy\" : [ { \"name\" : \"vmNames\", \"count\" : \"[parameters('numberOfNodes')]\", \"input\" : \"[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]\" } ] }, \"resources\" : [ { \"apiVersion\" : \"2019-05-01\", \"name\" : \"[concat('node', copyIndex())]\", \"type\" : \"Microsoft.Resources/deployments\", \"copy\" : { \"name\" : \"nodeCopy\", \"count\" : \"[length(variables('vmNames'))]\" }, \"properties\" : { \"mode\" : \"Incremental\", \"template\" : { \"USDschema\" : \"http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#\", \"contentVersion\" : \"1.0.0.0\", \"resources\" : [ { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Network/networkInterfaces\", \"name\" : \"[concat(variables('vmNames')[copyIndex()], '-nic')]\", \"location\" : \"[variables('location')]\", \"properties\" : { \"ipConfigurations\" : [ { \"name\" : \"pipConfig\", \"properties\" : { \"privateIPAllocationMethod\" : \"Dynamic\", \"subnet\" : { \"id\" : \"[variables('nodeSubnetRef')]\" } } } ] } }, { \"apiVersion\" : \"2018-06-01\", \"type\" : \"Microsoft.Compute/virtualMachines\", \"name\" : \"[variables('vmNames')[copyIndex()]]\", \"location\" : \"[variables('location')]\", \"tags\" : { \"kubernetes.io-cluster-ffranzupi\": \"owned\" }, 
\"identity\" : { \"type\" : \"userAssigned\", \"userAssignedIdentities\" : { \"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]\" : {} } }, \"dependsOn\" : [ \"[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]\" ], \"properties\" : { \"hardwareProfile\" : { \"vmSize\" : \"[parameters('nodeVMSize')]\" }, \"osProfile\" : { \"computerName\" : \"[variables('vmNames')[copyIndex()]]\", \"adminUsername\" : \"capi\", \"adminPassword\" : \"NotActuallyApplied!\", \"customData\" : \"[parameters('workerIgnition')]\", \"linuxConfiguration\" : { \"disablePasswordAuthentication\" : false } }, \"storageProfile\" : { \"imageReference\": { \"id\": \"[resourceId('Microsoft.Compute/galleries/images', variables('galleryName'), variables('imageName'))]\" }, \"osDisk\" : { \"name\": \"[concat(variables('vmNames')[copyIndex()],'_OSDisk')]\", \"osType\" : \"Linux\", \"createOption\" : \"FromImage\", \"managedDisk\": { \"storageAccountType\": \"Premium_LRS\" }, \"diskSizeGB\": 128 } }, \"networkProfile\" : { \"networkInterfaces\" : [ { \"id\" : \"[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]\", \"properties\": { \"primary\": true } } ] } } } ] } } } ] }",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network private-dns record-set a create -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps --ttl 300",
"az network private-dns record-set a add-record -g USD{RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER}",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure/installing-azure-user-infra
|
A.9. x86_energy_perf_policy
|
A.9. x86_energy_perf_policy The x86_energy_perf_policy tool allows administrators to define the relative importance of performance and energy efficiency. It is provided by the kernel-tools package. To view the current policy, run the x86_energy_perf_policy -r command. To set a new policy, run x86_energy_perf_policy profile_name, replacing profile_name with one of the following profiles: performance The processor does not sacrifice performance for the sake of saving energy. This is the default value. normal The processor tolerates minor performance compromises for potentially significant energy savings. This is a reasonable saving for most servers and desktops. powersave The processor accepts potentially significant performance decreases in order to maximize energy efficiency. For further details of how to use x86_energy_perf_policy, see the man page (man x86_energy_perf_policy).
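For instance, a typical root session might look like the following sketch; powersave is used here only because it is one of the profiles listed above:

# Show the current policy for each CPU
x86_energy_perf_policy -r

# Set the powersave profile on all CPUs
x86_energy_perf_policy powersave

# Read the policy back to confirm the change
x86_energy_perf_policy -r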
|
[
"x86_energy_perf_policy -r",
"x86_energy_perf_policy profile_name",
"man x86_energy_perf_policy"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-x86_energy_perf_policy
|
Chapter 3. OpenID Connect authorization code flow mechanism for protecting web applications
|
Chapter 3. OpenID Connect authorization code flow mechanism for protecting web applications To protect your web applications, you can use the industry-standard OpenID Connect (OIDC) Authorization Code Flow mechanism provided by the Quarkus OIDC extension. 3.1. Overview of the OIDC authorization code flow mechanism The Quarkus OpenID Connect (OIDC) extension can protect application HTTP endpoints by using the OIDC Authorization Code Flow mechanism supported by OIDC-compliant authorization servers, such as Keycloak . The Authorization Code Flow mechanism authenticates users of your web application by redirecting them to an OIDC provider, such as Keycloak, to log in. After authentication, the OIDC provider redirects the user back to the application with an authorization code that confirms that authentication was successful. Then, the application exchanges this code with the OIDC provider for an ID token (which represents the authenticated user), an access token, and a refresh token to authorize the user's access to the application. The following diagram outlines the Authorization Code Flow mechanism in Quarkus. Figure 3.1. Authorization code flow mechanism in Quarkus The Quarkus user requests access to a Quarkus web-app application. The Quarkus web-app redirects the user to the authorization endpoint, that is, the OIDC provider for authentication. The OIDC provider redirects the user to a login and authentication prompt. At the prompt, the user enters their user credentials. The OIDC provider authenticates the user credentials entered and, if successful, issues an authorization code and redirects the user back to the Quarkus web-app with the code included as a query parameter. The Quarkus web-app exchanges this authorization code with the OIDC provider for ID, access, and refresh tokens. The authorization code flow is completed and the Quarkus web-app uses the tokens issued to access information about the user and grants the relevant role-based authorization to that user. The following tokens are issued: ID token: The Quarkus web-app application uses the user information in the ID token to enable the authenticated user to log in securely and to provide role-based access to the web application. Access token: The Quarkus web-app might use the access token to access the UserInfo API to get additional information about the authenticated user or to propagate it to another endpoint. Refresh token: (Optional) If the ID and access tokens expire, the Quarkus web-app can use the refresh token to get new ID and access tokens. See also the OIDC configuration properties reference guide. To learn about how you can protect web applications by using the OIDC Authorization Code Flow mechanism, see Protect a web application by using OIDC authorization code flow . If you want to protect service applications by using OIDC Bearer token authentication, see OIDC Bearer token authentication . For information about how to support multiple tenants, see Using OpenID Connect Multi-Tenancy . 3.2. Using the authorization code flow mechanism 3.2.1. Configuring access to the OIDC provider endpoint The OIDC web-app application requires URLs of the OIDC provider's authorization, token, JsonWebKey (JWK) set, and possibly the UserInfo , introspection and end-session (RP-initiated logout) endpoints. By convention, they are discovered by adding a /.well-known/openid-configuration path to the configured quarkus.oidc.auth-server-url . 
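For example, a minimal web-app configuration that relies on this metadata discovery might look like the following application.properties sketch. It reuses the Keycloak realm URL, client id, and secret values shown later in this chapter; the quarkus.oidc.application-type property is not shown elsewhere in this excerpt and is included here only to indicate that the endpoint is a web application using the authorization code flow:

# Base URL of the OIDC provider; /.well-known/openid-configuration is appended for discovery
quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus
# Treat this application as a web-app so the authorization code flow is used
quarkus.oidc.application-type=web-app
# Client identity registered with the OIDC provider
quarkus.oidc.client-id=quarkus-app
quarkus.oidc.credentials.secret=mysecret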
Alternatively, if the discovery endpoint is not available, or you prefer to reduce the discovery endpoint round-trip, you can disable endpoint discovery and configure relative path values. For example: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false # Authorization endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/auth quarkus.oidc.authorization-path=/protocol/openid-connect/auth # Token endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token quarkus.oidc.token-path=/protocol/openid-connect/token # JWK set endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/certs quarkus.oidc.jwks-path=/protocol/openid-connect/certs # UserInfo endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/userinfo quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo # Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/token/introspect # End-session endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/logout quarkus.oidc.end-session-path=/protocol/openid-connect/logout Some OIDC providers support metadata discovery but do not return all the endpoint URL values required for the authorization code flow to complete or to support application functions, for example, user logout. To work around this limitation, you can configure the missing endpoint URL values locally, as outlined in the following example: # Metadata is auto-discovered but it does not return an end-session endpoint URL quarkus.oidc.auth-server-url=http://localhost:8180/oidcprovider/account # Configure the end-session URL locally. # It can be an absolute or relative (to 'quarkus.oidc.auth-server-url') address quarkus.oidc.end-session-path=logout You can use this same configuration to override a discovered endpoint URL if that URL does not work for the local Quarkus endpoint and a more specific value is required. For example, a provider that supports both global and application-specific end-session endpoints returns a global end-session URL such as http://localhost:8180/oidcprovider/account/global-logout . This URL will log the user out of all the applications into which the user is currently logged in. However, if the requirement is for the current application to log the user out of a specific application only, you can override the global end-session URL, by setting the quarkus.oidc.end-session-path=logout parameter. 3.2.1.1. OIDC provider client authentication OIDC providers typically require applications to be identified and authenticated when they interact with the OIDC endpoints. Quarkus OIDC, specifically the quarkus.oidc.runtime.OidcProviderClient class, authenticates to the OIDC provider when the authorization code must be exchanged for the ID, access, and refresh tokens, or when the ID and access tokens must be refreshed or introspected. Typically, client id and client secrets are defined for a given application when it enlists to the OIDC provider. All OIDC client authentication options are supported. 
For example: Example of client_secret_basic : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=mysecret Or: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret The following example shows the secret retrieved from a credentials provider : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app # This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.client-secret.provider.key=mysecret-key # Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.client-secret.provider.name=oidc-credentials-provider Example of client_secret_post quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=post Example of client_secret_jwt , where the signature algorithm is HS256: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow Example of client_secret_jwt , where the secret is retrieved from a credentials provider : quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app # This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.jwt.secret-provider.key=mysecret-key # Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.jwt.secret-provider.name=oidc-credentials-provider Example of private_key_jwt with the PEM key file, and where the signature algorithm is RS256: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem Example of private_key_jwt with the keystore file, where the signature algorithm is RS256: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-store-file=keystore.jks quarkus.oidc.credentials.jwt.key-store-password=mypassword quarkus.oidc.credentials.jwt.key-password=mykeypassword # Private key alias inside the keystore quarkus.oidc.credentials.jwt.key-id=mykeyAlias Using client_secret_jwt or private_key_jwt authentication methods ensures that a client secret does not get sent to the OIDC provider, therefore avoiding the risk of a secret being intercepted by a 'man-in-the-middle' attack. 3.2.1.1.1. Additional JWT authentication options If client_secret_jwt , private_key_jwt , or an Apple post_jwt authentication methods are used, then you can customize the JWT signature algorithm, key identifier, audience, subject and issuer. For example: # private_key_jwt client authentication quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem # This is a token key identifier 'kid' header - set it if your OIDC provider requires it: # Note if the key is represented in a JSON Web Key (JWK) format with a `kid` property, then # using 'quarkus.oidc.credentials.jwt.token-key-id' is not necessary. 
quarkus.oidc.credentials.jwt.token-key-id=mykey # Use RS512 signature algorithm instead of the default RS256 quarkus.oidc.credentials.jwt.signature-algorithm=RS512 # The token endpoint URL is the default audience value, use the base address URL instead: quarkus.oidc.credentials.jwt.audience=${quarkus.oidc-client.auth-server-url} # custom subject instead of the client id: quarkus.oidc.credentials.jwt.subject=custom-subject # custom issuer instead of the client id: quarkus.oidc.credentials.jwt.issuer=custom-issuer 3.2.1.1.2. Apple POST JWT The Apple OIDC provider uses a client_secret_post method whereby a secret is a JWT produced with a private_key_jwt authentication method, but with the Apple account-specific issuer and subject claims. In Quarkus Security, quarkus-oidc supports a non-standard client_secret_post_jwt authentication method, which you can configure as follows: # Apple provider configuration sets a 'client_secret_post_jwt' authentication method quarkus.oidc.provider=apple quarkus.oidc.client-id=${apple.client-id} quarkus.oidc.credentials.jwt.key-file=ecPrivateKey.pem quarkus.oidc.credentials.jwt.token-key-id=${apple.key-id} # Apple provider configuration sets ES256 signature algorithm quarkus.oidc.credentials.jwt.subject=${apple.subject} quarkus.oidc.credentials.jwt.issuer=${apple.issuer} 3.2.1.1.3. Mutual TLS (mTLS) Some OIDC providers might require that a client is authenticated as part of the mutual TLS authentication process. The following example shows how you can configure quarkus-oidc to support mTLS: quarkus.oidc.tls.verification=certificate-validation # Keystore configuration quarkus.oidc.tls.key-store-file=client-keystore.jks quarkus.oidc.tls.key-store-password=${key-store-password} # Add more keystore properties if needed: #quarkus.oidc.tls.key-store-alias=keyAlias #quarkus.oidc.tls.key-store-alias-password=keyAliasPassword # Truststore configuration quarkus.oidc.tls.trust-store-file=client-truststore.jks quarkus.oidc.tls.trust-store-password=${trust-store-password} # Add more truststore properties if needed: #quarkus.oidc.tls.trust-store-alias=certAlias 3.2.1.1.4. POST query Some providers, such as the Strava OAuth2 provider, require that client credentials be posted as HTTP POST query parameters: quarkus.oidc.provider=strava quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=query 3.2.1.2. Introspection endpoint authentication Some OIDC providers require authentication to their introspection endpoint by using Basic authentication with credentials that are different from the client_id and client_secret. If you have previously configured security authentication to support either the client_secret_basic or client_secret_post client authentication methods as described in the OIDC provider client authentication section, you might need to apply additional configuration as follows. If the tokens have to be introspected and the introspection endpoint-specific authentication mechanism is required, you can configure quarkus-oidc as follows: quarkus.oidc.introspection-credentials.name=introspection-user-name quarkus.oidc.introspection-credentials.secret=introspection-user-secret 3.2.1.3. OIDC request filters You can filter OIDC requests made by Quarkus to the OIDC provider by registering one or more OidcRequestFilter implementations, which can update or add new request headers and can also log requests.
For example: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.OidcConfigurationMetadata; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable public class OidcTokenRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { OidcConfigurationMetadata metadata = contextProps.get(OidcConfigurationMetadata.class.getName()); 1 // Metadata URI is absolute, request URI value is relative if (metadata.getTokenUri().endsWith(request.uri())) { 2 request.putHeader("TokenGrantDigest", calculateDigest(buffer.toString())); } } private String calculateDigest(String bodyString) { // Apply the required digest algorithm to the body string } } 1 Get OidcConfigurationMetadata , which contains all supported OIDC endpoint addresses. 2 Use OidcConfigurationMetadata to filter requests to the OIDC token endpoint only. Alternatively, you can use the @OidcEndpoint annotation with an OidcEndpoint.Type value to restrict a filter to requests targeting a specific OIDC endpoint only: import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcEndpoint; import io.quarkus.oidc.common.OidcEndpoint.Type; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable @OidcEndpoint(value = Type.DISCOVERY) 1 public class OidcDiscoveryRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { request.putHeader("Discovery", "OK"); } } 1 Restrict this filter to requests targeting the OIDC discovery endpoint only. 3.2.1.4. Redirecting to and from the OIDC provider When a user is redirected to the OIDC provider to authenticate, the redirect URL includes a redirect_uri query parameter, which indicates to the provider where the user has to be redirected to when the authentication is complete. In our case, this is the Quarkus application. Quarkus sets this parameter to the current application request URL by default. For example, if a user is trying to access a Quarkus service endpoint at http://localhost:8080/service/1 , then the redirect_uri parameter is set to http://localhost:8080/service/1 . Similarly, if the request URL is http://localhost:8080/service/2 , then the redirect_uri parameter is set to http://localhost:8080/service/2 . Some OIDC providers require the redirect_uri to have the same value for a given application, for example, http://localhost:8080/service/callback , for all the redirect URLs. In such cases, a quarkus.oidc.authentication.redirect-path property has to be set. For example, quarkus.oidc.authentication.redirect-path=/service/callback , and Quarkus will set the redirect_uri parameter to an absolute URL such as http://localhost:8080/service/callback , which will be the same regardless of the current request URL.
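For instance, a minimal web-app configuration that fixes the callback path might look like the following sketch; the server URL, client ID, and secret are illustrative values:
quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus
quarkus.oidc.client-id=quarkus-app
quarkus.oidc.credentials.secret=mysecret
quarkus.oidc.application-type=web-app
# Always use the same redirect_uri value, regardless of the original request URL:
quarkus.oidc.authentication.redirect-path=/service/callback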
If quarkus.oidc.authentication.redirect-path is set, but you need the original request URL to be restored after the user is redirected back to a unique callback URL, for example, http://localhost:8080/service/callback , set quarkus.oidc.authentication.restore-path-after-redirect property to true . This will restore the request URL such as http://localhost:8080/service/1 . 3.2.1.5. Customizing authentication requests By default, only the response_type (set to code ), scope (set to openid ), client_id , redirect_uri , and state properties are passed as HTTP query parameters to the OIDC provider's authorization endpoint when the user is redirected to it to authenticate. You can add more properties to it with quarkus.oidc.authentication.extra-params . For example, some OIDC providers might choose to return the authorization code as part of the redirect URI's fragment, which would break the authentication process. The following example shows how you can work around this issue: quarkus.oidc.authentication.extra-params.response_mode=query 3.2.1.6. Customizing the authentication error response When the user is redirected to the OIDC authorization endpoint to authenticate and, if necessary, authorize the Quarkus application, this redirect request might fail, for example, when an invalid scope is included in the redirect URI. In such cases, the provider redirects the user back to Quarkus with error and error_description parameters instead of the expected code parameter. For example, this can happen when an invalid scope or other invalid parameters are included in the redirect to the provider. In such cases, an HTTP 401 error is returned by default. However, you can request that a custom public error endpoint be called to return a more user-friendly HTML error page. To do this, set the quarkus.oidc.authentication.error-path property, as shown in the following example: quarkus.oidc.authentication.error-path=/error Ensure that the property starts with a forward slash (/) character and the path is relative to the base URI of the current endpoint. For example, if it is set to '/error' and the current request URI is https://localhost:8080/callback?error=invalid_scope , then a final redirect is made to https://localhost:8080/error?error=invalid_scope . Important To prevent the user from being redirected to this page to be re-authenticated, ensure that this error endpoint is a public resource. 3.2.2. Accessing authorization data You can access information about authorization in different ways. 3.2.2.1. Accessing ID and access tokens The OIDC code authentication mechanism acquires three tokens during the authorization code flow: ID token , access token, and refresh token. The ID token is always a JWT token and represents a user authentication with the JWT claims. You can use this to get the issuing OIDC endpoint, the username, and other information called claims . You can access ID token claims by injecting JsonWebToken with an IdToken qualifier: import jakarta.inject.Inject; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.security.Authenticated; @Path("/web-app") @Authenticated public class ProtectedResource { @Inject @IdToken JsonWebToken idToken; @GET public String getUserName() { return idToken.getName(); } } The OIDC web-app application usually uses the access token to access other endpoints on behalf of the currently logged-in user. 
You can access the raw access token as follows: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.AccessTokenCredential; import io.quarkus.security.Authenticated; @Path("/web-app") @Authenticated public class ProtectedResource { @Inject JsonWebToken accessToken; // or // @Inject // AccessTokenCredential accessTokenCredential; @GET public String getReservationOnBehalfOfUser() { String rawAccessToken = accessToken.getRawToken(); //or //String rawAccessToken = accessTokenCredential.getToken(); // Use the raw access token to access a remote endpoint. // For example, use RestClient to set this token as a `Bearer` scheme value of the HTTP `Authorization` header: // `Authorization: Bearer rawAccessToken`. return getReservationFromRemoteEndpoint(rawAccessToken); } } Note AccessTokenCredential is used if the access token issued to the Quarkus web-app application is opaque (binary) and cannot be parsed to a JsonWebToken or if the inner content is necessary for the application. Injection of the JsonWebToken and AccessTokenCredential is supported in both @RequestScoped and @ApplicationScoped contexts. Quarkus OIDC uses the refresh token to refresh the current ID and access tokens as part of its session management process. 3.2.2.2. User info If the ID token does not provide enough information about the currently authenticated user, you can get more information from the UserInfo endpoint. Set the quarkus.oidc.authentication.user-info-required=true property to request a UserInfo JSON object from the OIDC UserInfo endpoint. A request is sent to the OIDC provider UserInfo endpoint by using the access token returned with the authorization code grant response, and an io.quarkus.oidc.UserInfo (a simple jakarta.json.JsonObject wrapper) object is created. io.quarkus.oidc.UserInfo can be injected or accessed as a SecurityIdentity userinfo attribute. 3.2.2.3. Accessing the OIDC configuration information The current tenant's discovered OpenID Connect configuration metadata is represented by io.quarkus.oidc.OidcConfigurationMetadata and can be injected or accessed as a SecurityIdentity configuration-metadata attribute. The default tenant's OidcConfigurationMetadata is injected if the endpoint is public. 3.2.2.4. Mapping token claims and SecurityIdentity roles The way the roles are mapped to the SecurityIdentity roles from the verified tokens is identical to how it is done for the Bearer tokens . The only difference is that the ID token is used as the source of the roles by default. Note If you use Keycloak, set a microprofile-jwt client scope for the ID token to contain a groups claim. For more information, see the Keycloak server administration guide . However, depending on your OIDC provider, roles might be stored in the access token or the user info. If the access token contains the roles and this access token is not meant to be propagated to the downstream endpoints, then set quarkus.oidc.roles.source=accesstoken . If UserInfo is the source of the roles, then set quarkus.oidc.roles.source=userinfo , and if needed, quarkus.oidc.roles.role-claim-path . Additionally, you can also use a custom SecurityIdentityAugmentor to add the roles. For more information, see SecurityIdentity customization . You can also map SecurityIdentity roles created from token claims to deployment-specific roles with the HTTP Security policy . 3.2.3.
Ensuring validity of tokens and authentication data A core part of the authentication process is ensuring the chain of trust and validity of the information. This is done by ensuring tokens can be trusted. 3.2.3.1. Token verification and introspection The verification process of OIDC authorization code flow tokens follows the Bearer token authentication token verification and introspection logic. For more information, see the Token verification and introspection section of the "Quarkus OpenID Connect (OIDC) Bearer token authentication" guide. Note With Quarkus web-app applications, only the IdToken is verified by default because the access token is not used to access the current Quarkus web-app endpoint and is intended to be propagated to the services expecting this access token. If you expect the access token to contain the roles required to access the current Quarkus endpoint ( quarkus.oidc.roles.source=accesstoken ), then it will also be verified. 3.2.3.2. Token introspection and UserInfo cache Code flow access tokens are not introspected unless they are expected to be the source of roles. However, they will be used to get UserInfo . There will be one or two remote calls with the code flow access token if the token introspection, UserInfo , or both are required. For more information about using the default token cache or registering a custom cache implementation, see Token introspection and UserInfo cache . 3.2.3.3. JSON web token claim verification For information about the claim verification, including the iss (issuer) claim, see the JSON Web Token claim verification section. It applies to ID tokens and also to access tokens in a JWT format, if the web-app application has requested the access token verification. 3.2.3.4. Further security with Proof Key for Code Exchange (PKCE) Proof Key for Code Exchange (PKCE) minimizes the risk of authorization code interception. While PKCE is of primary importance to public OIDC clients, such as SPA scripts running in a browser, it can also provide extra protection to Quarkus OIDC web-app applications. With PKCE, Quarkus OIDC web-app applications act as confidential OIDC clients that can securely store the client secret and use it to exchange the code for the tokens. You can enable PKCE for your OIDC web-app endpoint with a quarkus.oidc.authentication.pkce-required property and a 32-character secret that is required to encrypt the PKCE code verifier in the state cookie, as shown in the following example: quarkus.oidc.authentication.pkce-required=true quarkus.oidc.authentication.state-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU If you already have a 32-character client secret, you do not need to set the quarkus.oidc.authentication.pkce-secret property unless you prefer to use a different secret key. This secret will be auto-generated if it is not configured and if the fallback to the client secret is not possible in cases where the client secret is less than 16 characters long. The secret key is required to encrypt a randomly generated PKCE code_verifier while the user is redirected with the code_challenge query parameter to an OIDC provider to authenticate. The code_verifier is decrypted when the user is redirected back to Quarkus and sent to the token endpoint alongside the code , client secret, and other parameters to complete the code exchange. The provider will fail the code exchange if a SHA256 digest of the code_verifier does not match the code_challenge that was provided during the authentication request. 3.2.4. 
Handling and controlling the lifetime of authentication Another important requirement for authentication is to ensure that the data the session is based on is up-to-date without requiring the user to authenticate for every single request. There are also situations where a logout event is explicitly requested. Use the following key points to find the right balance for securing your Quarkus applications: 3.2.4.1. Cookies The OIDC adapter uses cookies to keep the session, code flow, and post-logout state. This state is a key element controlling the lifetime of authentication data. Use the quarkus.oidc.authentication.cookie-path property to ensure that the same cookie is visible when you access protected resources with overlapping or different roots. For example: /index.html and /web-app/service /web-app/service1 and /web-app/service2 /web-app1/service and /web-app2/service By default, quarkus.oidc.authentication.cookie-path is set to / but you can change this to a more specific path if required, for example, /web-app . To set the cookie path dynamically, configure the quarkus.oidc.authentication.cookie-path-header property. For example, to set the cookie path dynamically by using the value of the X-Forwarded-Prefix HTTP header, configure the property to quarkus.oidc.authentication.cookie-path-header=X-Forwarded-Prefix . If quarkus.oidc.authentication.cookie-path-header is set but no configured HTTP header is available in the current request, then the quarkus.oidc.authentication.cookie-path will be checked. If your application is deployed across multiple domains, set the quarkus.oidc.authentication.cookie-domain property so that the session cookie is visible to all protected Quarkus services. For example, if you have Quarkus services deployed on the following two domains, then you must set the quarkus.oidc.authentication.cookie-domain property to company.net : https://whatever.wherever.company.net/ https://another.address.company.net/ 3.2.4.2. Session cookie and default TokenStateManager OIDC CodeAuthenticationMechanism uses the default io.quarkus.oidc.TokenStateManager interface implementation to keep the ID, access, and refresh tokens returned in the authorization code or refresh grant responses in an encrypted session cookie. It makes Quarkus OIDC endpoints completely stateless and it is recommended to follow this strategy to achieve the best scalability results. See the Session cookie and custom TokenStateManager section for alternative methods of token storage. This is ideal for those seeking customized solutions for token state management, especially when standard server-side storage does not meet your specific requirements. You can configure the default TokenStateManager to avoid saving an access token in the session cookie and to keep only the ID and refresh tokens, or only a single ID token. An access token is only required if the endpoint needs to do the following actions: Retrieve UserInfo Access the downstream service with this access token Use the roles associated with the access token, which are checked by default In such cases, use the quarkus.oidc.token-state-manager.strategy property to configure the token state strategy as follows:
To keep the ID and refresh tokens only, set quarkus.oidc.token-state-manager.strategy=id-refresh-tokens . To keep the ID token only, set quarkus.oidc.token-state-manager.strategy=id-token . If your chosen session cookie strategy combines tokens and generates a large session cookie value that is greater than 4KB, some browsers might not be able to handle such cookie sizes. This can occur when the ID, access, and refresh tokens are JWT tokens and the selected strategy is keep-all-tokens , or when the ID and refresh tokens are kept and the strategy is id-refresh-tokens . To work around this issue, you can set quarkus.oidc.token-state-manager.split-tokens=true to create a unique session cookie for each token. The default TokenStateManager encrypts the tokens before storing them in the session cookie. The following example shows how you configure it to split the tokens and encrypt them: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.token-state-manager.split-tokens=true quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU The token encryption secret must be at least 32 characters long. If this key is not configured, then either quarkus.oidc.credentials.secret or quarkus.oidc.credentials.jwt.secret will be hashed to create an encryption key. Configure the quarkus.oidc.token-state-manager.encryption-secret property if Quarkus authenticates to the OIDC provider by using one of the following authentication methods: mTLS private_key_jwt , where a private RSA or EC key is used to sign a JWT token Otherwise, a random key is generated, which can be problematic if the Quarkus application is running in the cloud with multiple pods managing the requests. You can disable token encryption in the session cookie by setting quarkus.oidc.token-state-manager.encryption-required=false . 3.2.4.3. Session cookie and custom TokenStateManager If you want to customize the way the tokens are associated with the session cookie, register a custom io.quarkus.oidc.TokenStateManager implementation as an @ApplicationScoped CDI bean. For example, you might want to keep the tokens in a cache cluster and have only a key stored in a session cookie. Note that this approach might introduce some challenges if you need to make the tokens available across multiple microservices nodes.
Here is a simple example: package io.quarkus.oidc.test; import jakarta.annotation.Priority; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Alternative; import jakarta.inject.Inject; import io.quarkus.oidc.AuthorizationCodeTokens; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TokenStateManager; import io.quarkus.oidc.runtime.DefaultTokenStateManager; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped @Alternative @Priority(1) public class CustomTokenStateManager implements TokenStateManager { @Inject DefaultTokenStateManager tokenStateManager; @Override public Uni<String> createTokenState(RoutingContext routingContext, OidcTenantConfig oidcConfig, AuthorizationCodeTokens sessionContent, TokenStateManager.CreateTokenStateRequestContext requestContext) { return tokenStateManager.createTokenState(routingContext, oidcConfig, sessionContent, requestContext) .map(t -> (t + "|custom")); } @Override public Uni<AuthorizationCodeTokens> getTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, TokenStateManager.GetTokensRequestContext requestContext) { if (!tokenState.endsWith("|custom")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.getTokens(routingContext, oidcConfig, defaultState, requestContext); } @Override public Uni<Void> deleteTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, TokenStateManager.DeleteTokensRequestContext requestContext) { if (!tokenState.endsWith("|custom")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.deleteTokens(routingContext, oidcConfig, defaultState, requestContext); } } For information about the default TokenStateManager storing tokens in an encrypted session cookie, see Session cookie and default TokenStateManager . 3.2.4.4. Logout and expiration There are two main ways for the authentication information to expire: the tokens expired and were not renewed or an explicit logout operation was triggered. Let's start with explicit logout operations. 3.2.4.4.1. User-initiated logout Users can request a logout by sending a request to the Quarkus endpoint logout path set with a quarkus.oidc.logout.path property. For example, if the endpoint address is https://application.com/webapp and the quarkus.oidc.logout.path is set to /logout , then the logout request must be sent to https://application.com/webapp/logout . This logout request starts an RP-initiated logout . The user will be redirected to the OIDC provider to log out, where they can be asked to confirm the logout is indeed intended. The user will be returned to the endpoint post-logout page once the logout has been completed and if the quarkus.oidc.logout.post-logout-path property is set. For example, if the endpoint address is https://application.com/webapp and the quarkus.oidc.logout.post-logout-path is set to /signin , then the user will be returned to https://application.com/webapp/signin . Note, this URI must be registered as a valid post_logout_redirect_uri in the OIDC provider. If the quarkus.oidc.logout.post-logout-path is set, then a q_post_logout cookie will be created and a matching state query parameter will be added to the logout redirect URI and the OIDC provider will return this state once the logout has been completed. 
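If the post-logout page is served by a Jakarta REST resource, this returned state can be checked with a request filter similar to the following sketch. The /postlogout path, the class name, and the assumption that the q_post_logout cookie value can be compared directly with the returned state parameter are illustrative, not prescribed by Quarkus:
package org.acme.logout; // hypothetical package name

import jakarta.ws.rs.container.ContainerRequestContext;
import jakarta.ws.rs.container.ContainerRequestFilter;
import jakarta.ws.rs.core.Cookie;
import jakarta.ws.rs.core.Response;
import jakarta.ws.rs.ext.Provider;

@Provider
public class PostLogoutStateFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // Only inspect requests arriving at the illustrative post-logout resource path
        if (!requestContext.getUriInfo().getPath().endsWith("/postlogout")) {
            return;
        }
        String state = requestContext.getUriInfo().getQueryParameters().getFirst("state");
        Cookie postLogout = requestContext.getCookies().get("q_post_logout");
        // Assumption: the cookie stores the state value verbatim; reject the request if they do not match
        if (state == null || postLogout == null || !state.equals(postLogout.getValue())) {
            requestContext.abortWith(Response.status(Response.Status.BAD_REQUEST).build());
        }
    }
}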
It is recommended for Quarkus web-app applications to check that the state query parameter matches the value of the q_post_logout cookie, which can be done, for example, in a Jakarta REST filter. Note that the cookie name varies when using OpenID Connect Multi-Tenancy . For example, it will be named q_post_logout_tenant_1 for a tenant with a tenant_1 ID, and so on. Here is an example of how to configure a Quarkus application to initiate a logout flow: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.path=/logout # Logged-out users should be returned to the /welcome.html site which will offer an option to re-login: quarkus.oidc.logout.post-logout-path=/welcome.html # Only the authenticated users can initiate a logout: quarkus.http.auth.permission.authenticated.paths=/logout quarkus.http.auth.permission.authenticated.policy=authenticated # All users can see the Welcome page: quarkus.http.auth.permission.public.paths=/welcome.html quarkus.http.auth.permission.public.policy=permit You might also want to set quarkus.oidc.authentication.cookie-path to a path value common to all the application resources, which is / in this example. For more information, see the Cookies section. Note Some OIDC providers do not support the RP-initiated logout specification and do not return an OpenID Connect well-known end_session_endpoint metadata property. However, this is not a problem for Quarkus because the specific logout mechanisms of such OIDC providers only differ in how the logout URL query parameters are named. According to the RP-initiated logout specification, the quarkus.oidc.logout.post-logout-path property is represented as a post_logout_redirect_uri query parameter, which is not recognized by the providers that do not support this specification. You can use quarkus.oidc.logout.post-logout-uri-param to work around this issue. You can also request more logout query parameters added with quarkus.oidc.logout.extra-params . For example, here is how you can support a logout with Auth0 : quarkus.oidc.auth-server-url=https://dev-xxx.us.auth0.com quarkus.oidc.client-id=redacted quarkus.oidc.credentials.secret=redacted quarkus.oidc.application-type=web-app quarkus.oidc.logout.path=/logout quarkus.oidc.logout.post-logout-path=/welcome.html # Auth0 does not return the `end_session_endpoint` metadata property. Instead, you must configure it: quarkus.oidc.end-session-path=v2/logout # Auth0 will not recognize the 'post_logout_redirect_uri' query parameter so ensure it is named as 'returnTo': quarkus.oidc.logout.post-logout-uri-param=returnTo # Set more properties if needed. # For example, if 'client_id' is provided, then a valid logout URI should be set as the Auth0 Application property, without it - as Auth0 Tenant property: quarkus.oidc.logout.extra-params.client_id=${quarkus.oidc.client-id} 3.2.4.4.2. Back-channel logout The OIDC provider can force the logout of all applications by using the authentication data. This is known as back-channel logout. In this case, the OIDC provider will call a specific URL of each application to trigger that logout. OIDC providers use Back-channel logout to log out the current user from all the applications into which this user is currently logged in, bypassing the user agent.
You can configure Quarkus to support Back-channel logout as follows: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.backchannel.path=/back-channel-logout The absolute back-channel logout URL is calculated by adding the quarkus.oidc.logout.backchannel.path value to the current endpoint URL, for example, http://localhost:8080/back-channel-logout . You will need to configure this URL in the admin console of your OIDC provider. You will also need to configure a token age property for the logout token verification to succeed if your OIDC provider does not set an expiry claim in the current logout token. For example, set quarkus.oidc.token.age=10S to ensure that no more than 10 seconds elapse since the logout token's iat (issued at) time. 3.2.4.4.3. Front-channel logout You can use Front-channel logout to log out the current user directly from the user agent, for example, the browser. It is similar to Back-channel logout but the logout steps are executed by the user agent, such as the browser, and not in the background by the OIDC provider. This option is rarely used. You can configure Quarkus to support Front-channel logout as follows: quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.frontchannel.path=/front-channel-logout This path will be compared to the current request's path, and the user will be logged out if these paths match. 3.2.4.4.4. Local logout User-initiated logout will log the user out of the OIDC provider. If the provider is used for single sign-on, this might not be what you require. If, for example, your OIDC provider is Google, you will be logged out from Google and its services. Instead, the user might just want to log out of that specific application. Another use case might be when the OIDC provider does not have a logout endpoint. By using OidcSession , you can support a local logout, which means that only the local session cookie is cleared, as shown in the following example: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.OidcSession; @Path("/service") public class ServiceResource { @Inject OidcSession oidcSession; @GET @Path("logout") public String logout() { oidcSession.logout().await().indefinitely(); return "You are logged out"; } } 3.2.4.4.4.1. Using OidcSession for local logout io.quarkus.oidc.OidcSession is a wrapper around the current IdToken , which can help to perform a Local logout , retrieve the current session's tenant identifier, and check when the session will expire. More useful methods will be added to it over time. 3.2.4.5. Session management By default, logout is based on the expiration time of the ID token issued by the OIDC provider. When the ID token expires, the current user session at the Quarkus endpoint is invalidated, and the user is redirected to the OIDC provider again to authenticate. If the session at the OIDC provider is still active, users are automatically re-authenticated without needing to provide their credentials again. The current user session can be automatically extended by enabling the quarkus.oidc.token.refresh-expired property. If set to true , when the current ID token expires, a refresh token grant will be used to refresh the ID token as well as access and refresh tokens.
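A configuration sketch that combines these session-management properties might look as follows; the duration values are illustrative and should be aligned with the token lifespans configured at your OIDC provider (the individual properties are discussed in more detail below):
# Refresh the ID and access tokens with the refresh token grant when the ID token expires:
quarkus.oidc.token.refresh-expired=true
# Proactively refresh tokens that will expire within the next 3 minutes (illustrative value):
quarkus.oidc.token.refresh-token-time-skew=3M
# Keep the session cookie alive for some time after the ID token expires (illustrative value):
quarkus.oidc.authentication.session-age-extension=30M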
Tip If you have a single-page application calling service applications, where an OIDC provider script such as keycloak.js manages the authorization code flow, then that script will also control the SPA authentication session lifespan. If you work with a Quarkus OIDC web-app application, then the Quarkus OIDC code authentication mechanism manages the user session lifespan. To use the refresh token, you should carefully configure the session cookie age. The session age should be longer than the ID token lifespan and close to or equal to the refresh token lifespan. You calculate the session age by adding the lifespan value of the current ID token and the values of the quarkus.oidc.authentication.session-age-extension and quarkus.oidc.token.lifespan-grace properties. Tip Use the quarkus.oidc.authentication.session-age-extension property to significantly extend the session lifespan, if required. Use the quarkus.oidc.token.lifespan-grace property only to account for small clock skews. When the current authenticated user returns to the protected Quarkus endpoint and the ID token associated with the session cookie has expired, then, by default, the user is automatically redirected to the OIDC Authorization endpoint to re-authenticate. The OIDC provider might not challenge the user again if the session between the user and this OIDC provider is still active, which might be the case if the session is configured to last longer than the ID token. If quarkus.oidc.token.refresh-expired is set to true , then the expired ID token (and the access token) is refreshed by using the refresh token returned with the initial authorization code grant response. This refresh token might also be recycled (refreshed) itself as part of this process. As a result, the new session cookie is created, and the session is extended. Note In instances where the user is not very active, you can use the quarkus.oidc.authentication.session-age-extension property to help handle expired ID tokens. If the ID token expires, the session cookie might not be returned to the Quarkus endpoint during the user request as the cookie lifespan would have elapsed. Quarkus assumes that this request is the first authentication request. Set quarkus.oidc.authentication.session-age-extension to be reasonably long for your barely-active users and in accordance with your security policies. You can go one step further and proactively refresh ID tokens or access tokens that are about to expire. Set quarkus.oidc.token.refresh-token-time-skew to the interval by which you want to anticipate the refresh. If, during the current user request, it is calculated that the current ID token will expire within this quarkus.oidc.token.refresh-token-time-skew , then it is refreshed, and the new session cookie is created. This property should be set to a value that is less than the ID token lifespan; the closer it is to this lifespan value, the more often the ID token is refreshed. You can further optimize this process by having a simple JavaScript function ping your Quarkus endpoint periodically to emulate the user activity, which minimizes the time frame during which the user might have to be re-authenticated. Note You cannot extend the user session indefinitely. The returning user with the expired ID token will have to re-authenticate at the OIDC provider endpoint once the refresh token has expired. 3.2.5.
Integration with GitHub and non-OIDC OAuth2 providers Some well-known providers such as GitHub or LinkedIn are not OpenID Connect providers, but OAuth2 providers that support the authorization code flow . For example, GitHub OAuth2 and LinkedIn OAuth2 . Remember, OIDC is built on top of OAuth2. The main difference between OIDC and OAuth2 providers is that OIDC providers return an ID Token that represents a user authentication, in addition to the standard authorization code flow access and refresh tokens returned by OAuth2 providers. OAuth2 providers such as GitHub do not return IdToken , and the user authentication is implicit and indirectly represented by the access token. This access token represents an authenticated user authorizing the current Quarkus web-app application to access some data on behalf of the authenticated user. For OIDC, you validate the ID token as proof of authentication validity whereas in the case of OAuth2, you validate the access token. This is done by subsequently calling an endpoint that requires the access token and that typically returns user information. This approach is similar to the OIDC UserInfo approach, with UserInfo fetched by Quarkus OIDC on your behalf. For example, when working with GitHub, the Quarkus endpoint can acquire an access token, which allows the Quarkus endpoint to request a GitHub profile for the current user. To support the integration with such OAuth2 servers, quarkus-oidc needs to be configured a bit differently to allow the authorization code flow responses without IdToken : quarkus.oidc.authentication.id-token-required=false . Note Even though you configure the extension to support the authorization code flows without IdToken , an internal IdToken is generated to standardize the way quarkus-oidc operates. You use an IdToken to support the authentication session and to avoid redirecting the user to the provider, such as GitHub, on every request. In this case, the session lifespan is set to 5 minutes, which you can extend further as described in the session management section. This simplifies how you handle an application that supports multiple OIDC providers. The next step is to ensure that the returned access token can be useful and is valid for the current Quarkus endpoint. The first way is to call the OAuth2 provider introspection endpoint by configuring quarkus.oidc.introspection-path , if the provider offers such an endpoint. In this case, you can use the access token as a source of roles by setting quarkus.oidc.roles.source=accesstoken . If no introspection endpoint is present, you can attempt instead to request UserInfo from the provider as it will at least validate the access token. To do so, specify quarkus.oidc.token.verify-access-token-with-user-info=true . You also need to set the quarkus.oidc.user-info-path property to a URL endpoint that fetches the user info (or to an endpoint protected by the access token). For GitHub, since it does not have an introspection endpoint, requesting the UserInfo is required. Note Requiring UserInfo involves making a remote call on every request. Therefore, you might want to consider caching UserInfo data. For more information, see the Token Introspection and UserInfo cache section of the "OpenID Connect (OIDC) Bearer token authentication" guide. Alternatively, you might want to request that UserInfo is embedded into the internally generated IdToken with the quarkus.oidc.cache-user-info-in-idtoken=true property.
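Putting these options together, a configuration sketch for an OAuth2-only provider without an introspection endpoint might look like the following; the server URL and the /userinfo path are illustrative and depend on your provider:
quarkus.oidc.auth-server-url=https://oauth2.example.com
quarkus.oidc.client-id=quarkus-app
quarkus.oidc.credentials.secret=mysecret
quarkus.oidc.application-type=web-app
# Plain OAuth2 providers do not return an IdToken:
quarkus.oidc.authentication.id-token-required=false
# Validate the access token by requesting UserInfo:
quarkus.oidc.token.verify-access-token-with-user-info=true
quarkus.oidc.user-info-path=/userinfo
# If the provider has no well-known configuration endpoint, also disable discovery
# and configure the endpoint paths manually, for example:
# quarkus.oidc.discovery-enabled=false
# quarkus.oidc.authorization-path=/authorize
# quarkus.oidc.token-path=/token
# Optionally embed UserInfo into the internally generated IdToken:
quarkus.oidc.cache-user-info-in-idtoken=true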
The advantage of this approach is that, by default, no cached UserInfo state will be kept with the endpoint - instead it will be stored in a session cookie. You might also want to consider encrypting IdToken in this case if UserInfo contains sensitive data. For more information, see Encrypt tokens with TokenStateManager . OAuth2 servers might not support a well-known configuration endpoint. In this case, you must disable the discovery and configure the authorization, token, and introspection and UserInfo endpoint paths manually. For well-known OIDC or OAuth2 providers, such as Apple, Facebook, GitHub, Google, Microsoft, Spotify, and Twitter, Quarkus can help significantly simplify your application's configuration with the quarkus.oidc.provider property. Here is how you can integrate quarkus-oidc with GitHub after you have created a GitHub OAuth application . Configure your Quarkus endpoint like this: quarkus.oidc.provider=github quarkus.oidc.client-id=github_app_clientid quarkus.oidc.credentials.secret=github_app_clientsecret # user:email scope is requested by default, use 'quarkus.oidc.authentication.scopes' to request different scopes such as `read:user`. # See https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps for more information. # Consider enabling UserInfo Cache # quarkus.oidc.token-cache.max-size=1000 # quarkus.oidc.token-cache.time-to-live=5M # # Or having UserInfo cached inside IdToken itself # quarkus.oidc.cache-user-info-in-idtoken=true For more information about configuring other well-known providers, see OpenID Connect providers . This is all that is needed for an endpoint like this one to return the currently-authenticated user's profile with GET http://localhost:8080/github/userinfo and access it as the individual UserInfo properties: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; @Path("/github") @Authenticated public class TokenResource { @Inject UserInfo userInfo; @GET @Path("/userinfo") @Produces("application/json") public String getUserInfo() { return userInfo.getUserInfoString(); } } If you support more than one social provider with the help of OpenID Connect Multi-Tenancy , for example, Google, which is an OIDC provider that returns IdToken , and GitHub, which is an OAuth2 provider that does not return IdToken and only allows access to UserInfo , then you can have your endpoint working with only the injected SecurityIdentity for both Google and GitHub flows. 
A simple augmentation of SecurityIdentity will be required where a principal created with the internally-generated IdToken will be replaced with the UserInfo -based principal when the GitHub flow is active: package io.quarkus.it.keycloak; import java.security.Principal; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.UserInfo; import io.quarkus.security.identity.AuthenticationRequestContext; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.security.identity.SecurityIdentityAugmentor; import io.quarkus.security.runtime.QuarkusSecurityIdentity; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomSecurityIdentityAugmentor implements SecurityIdentityAugmentor { @Override public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { RoutingContext routingContext = identity.getAttribute(RoutingContext.class.getName()); if (routingContext != null && routingContext.normalizedPath().endsWith("/github")) { QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); UserInfo userInfo = identity.getAttribute("userinfo"); builder.setPrincipal(new Principal() { @Override public String getName() { return userInfo.getString("preferred_username"); } }); identity = builder.build(); } return Uni.createFrom().item(identity); } } Now, the following code will work when the user signs into your application by using Google or GitHub: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; @Path("/service") @Authenticated public class TokenResource { @Inject SecurityIdentity identity; @GET @Path("/google") @Produces("application/json") public String getGoogleUserName() { return identity.getPrincipal().getName(); } @GET @Path("/github") @Produces("application/json") public String getGitHubUserName() { return identity.getPrincipal().getName(); } } Possibly a simpler alternative is to inject both @IdToken JsonWebToken and UserInfo and use JsonWebToken when handling the providers that return IdToken and use UserInfo with the providers that do not return IdToken . You must ensure that the callback path you enter in the GitHub OAuth application configuration matches the endpoint path where you want the user to be redirected after a successful GitHub authentication and application authorization. In this case, it has to be set to http://localhost:8080/github/userinfo . 3.2.6. Listening to important authentication events You can register an @ApplicationScoped bean that will observe important OIDC authentication events. When a user logs in for the first time, re-authenticates, or refreshes the session, the listener is notified. In the future, more events might be reported.
For example: import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import io.quarkus.oidc.SecurityEvent; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class SecurityEventListener { public void event(@Observes SecurityEvent event) { String tenantId = event.getSecurityIdentity().getAttribute("tenant-id"); RoutingContext vertxContext = event.getSecurityIdentity().getAttribute(RoutingContext.class.getName()); vertxContext.put("listener-message", String.format("event:%s,tenantId:%s", event.getEventType().name(), tenantId)); } } Tip You can listen to other security events as described in the Observe security events section of the Security Tips and Tricks guide. 3.2.7. Propagating tokens to downstream services For information about Authorization Code Flow access token propagation to downstream services, see the Token Propagation section. 3.3. Integration considerations Your application secured by OIDC integrates in an environment where it can be called from single-page applications. It must work with well-known OIDC providers, run behind an HTTP reverse proxy, require external and internal access, and so on. This section discusses these considerations. 3.3.1. Single-page applications You can check if implementing single-page applications (SPAs) the way it is suggested in the single-page applications section of the "OpenID Connect (OIDC) Bearer token authentication" guide meets your requirements. If you prefer to use SPAs and JavaScript APIs such as Fetch or XMLHttpRequest (XHR) with Quarkus web applications, be aware that OIDC providers might not support cross-origin resource sharing (CORS) for authorization endpoints where the users are authenticated after a redirect from Quarkus. This will lead to authentication failures if the Quarkus application and the OIDC provider are hosted on different HTTP domains, ports, or both. In such cases, set the quarkus.oidc.authentication.java-script-auto-redirect property to false , which will instruct Quarkus to return a 499 status code and a WWW-Authenticate header with the OIDC value. The browser script must set a header to identify the current request as a JavaScript request for a 499 status code to be returned when the quarkus.oidc.authentication.java-script-auto-redirect property is set to false . If the script engine sets an engine-specific request header itself, then you can register a custom io.quarkus.oidc.JavaScriptRequestChecker bean, which will inform Quarkus if the current request is a JavaScript request. For example, if the JavaScript engine sets a header such as HX-Request: true , then you can have it checked like this: import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.JavaScriptRequestChecker; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomJavaScriptRequestChecker implements JavaScriptRequestChecker { @Override public boolean isJavaScriptRequest(RoutingContext context) { return "true".equals(context.request().getHeader("HX-Request")); } } and reload the last requested page in case of a 499 status code. Otherwise, you must also update the browser script to set the X-Requested-With header with the JavaScript value and reload the last requested page in case of a 499 status code.
For example: Future<void> callQuarkusService() async { Map<String, String> headers = Map.fromEntries([MapEntry("X-Requested-With", "JavaScript")]); await http .get("https://localhost:443/serviceCall", headers: headers) .then((response) { if (response.statusCode == 499) { window.location.assign("https://localhost:443/serviceCall"); } }); } 3.3.2. Cross-origin resource sharing If you plan to consume this application from a single-page application running on a different domain, you need to configure cross-origin resource sharing (CORS). For more information, see the CORS filter section of the "Cross-origin resource sharing" guide. 3.3.3. Running a Quarkus application behind a reverse proxy The OIDC authentication mechanism can be affected if your Quarkus application is running behind a reverse proxy, gateway, or firewall, where the HTTP Host header might be reset to the internal IP address, the HTTPS connection might be terminated, and so on. For example, an authorization code flow redirect_uri parameter might be set to the internal host instead of the expected external one. In such cases, configuring Quarkus to recognize the original headers forwarded by the proxy will be required. For more information, see the Running behind a reverse proxy Vert.x documentation section. For example, if your Quarkus endpoint runs in a cluster behind Kubernetes Ingress, then a redirect from the OIDC provider back to this endpoint might not work because the calculated redirect_uri parameter might point to the internal endpoint address. You can resolve this problem by using the following configuration, where X-ORIGINAL-HOST is set by Kubernetes Ingress to represent the external endpoint address: quarkus.http.proxy.proxy-address-forwarding=true quarkus.http.proxy.allow-forwarded=false quarkus.http.proxy.enable-forwarded-host=true quarkus.http.proxy.forwarded-host-header=X-ORIGINAL-HOST The quarkus.oidc.authentication.force-redirect-https-scheme property can also be used when the Quarkus application is running behind an SSL terminating reverse proxy. 3.3.4. External and internal access to the OIDC provider The OIDC provider's externally accessible authorization, logout, and other endpoints can have different HTTP(S) URLs compared to the URLs auto-discovered or configured relative to the quarkus.oidc.auth-server-url internal URL. In such cases, the endpoint might report an issuer verification failure and redirects to the externally-accessible OIDC provider endpoints might fail. If you work with Keycloak, then start it with a KEYCLOAK_FRONTEND_URL system property set to the externally-accessible base URL. If you work with other OIDC providers, check the documentation of your provider. 3.4. OIDC SAML identity broker If your identity provider does not implement OpenID Connect but only the legacy XML-based SAML 2.0 SSO protocol, then Quarkus cannot be used as a SAML 2.0 adapter, similarly to how quarkus-oidc is used as an OIDC adapter. However, many OIDC providers such as Keycloak, Okta, Auth0, and Microsoft ADFS offer OIDC to SAML 2.0 bridges. You can create an identity broker connection to a SAML 2.0 provider in your OIDC provider and use quarkus-oidc to authenticate your users to this SAML 2.0 provider, with the OIDC provider coordinating OIDC and SAML 2.0 communications. As far as Quarkus endpoints are concerned, they can continue using the same Quarkus Security, OIDC API, annotations such as @Authenticated , SecurityIdentity , and so on. For example, assume Okta is your SAML 2.0 provider and Keycloak is your OIDC provider.
Here is a typical sequence explaining how to configure Keycloak to broker with the Okta SAML 2.0 provider. First, create a new SAML2 integration in your Okta Dashboard/Applications . For example, name it as OktaSaml . Next, configure it to point to a Keycloak SAML broker endpoint. At this point, you need to know the name of the Keycloak realm, for example, quarkus , and, assuming that the Keycloak SAML broker alias is saml , enter the endpoint address as http://localhost:8081/realms/quarkus/broker/saml/endpoint . Enter the service provider (SP) entity ID as http://localhost:8081/realms/quarkus , where http://localhost:8081 is a Keycloak base address and saml is a broker alias. Next, save this SAML integration and note its Metadata URL. Next, add a SAML provider to Keycloak: First, as usual, create a new realm or import the existing realm to Keycloak . In this case, the realm name has to be quarkus . Now, in the quarkus realm properties, navigate to Identity Providers and add a new SAML provider: Note the alias is set to saml , Redirect URI is http://localhost:8081/realms/quarkus/broker/saml/endpoint and Service provider entity ID is http://localhost:8081/realms/quarkus - these are the same values you entered when creating the Okta SAML integration in the previous step. Finally, set Service entity descriptor to point to the Okta SAML Integration Metadata URL you noted at the end of the previous step. Next, if you want, you can register this Keycloak SAML provider as a default provider by navigating to Authentication/browser/Identity Provider Redirector config and setting both the Alias and Default Identity Provider properties to saml . If you do not configure it as a default provider then, at authentication time, Keycloak offers two options: Authenticate with the SAML provider Authenticate directly to Keycloak with a username and password Now, configure the Quarkus OIDC web-app application to point to the Keycloak quarkus realm, quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus . Then, you are ready to start authenticating your Quarkus users to the Okta SAML 2.0 provider by using an OIDC to SAML bridge that is provided by Keycloak OIDC and Okta SAML 2.0 providers. You can configure other OIDC providers to provide a SAML bridge similarly to how it can be done for Keycloak. 3.5. Testing Testing is often tricky when it comes to authentication to a separate OIDC-like server. Quarkus offers several options from mocking to a local run of an OIDC provider. Start by adding the following dependencies to your test project: Using Maven: <dependency> <groupId>net.sourceforge.htmlunit</groupId> <artifactId>htmlunit</artifactId> <exclusions> <exclusion> <groupId>org.eclipse.jetty</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> <scope>test</scope> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("net.sourceforge.htmlunit:htmlunit") testImplementation("io.quarkus:quarkus-junit5") 3.5.1. Wiremock Add the following dependency: Using Maven: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-oidc-server</artifactId> <scope>test</scope> </dependency> Using Gradle: testImplementation("io.quarkus:quarkus-test-oidc-server") Prepare the REST test endpoints and set application.properties .
For example: # keycloak.url is set by OidcWiremockTestResource quarkus.oidc.auth-server-url=${keycloak.url}/realms/quarkus/ quarkus.oidc.client-id=quarkus-web-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app Finally, write the test code, for example: import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.Test; import com.gargoylesoftware.htmlunit.SilentCssErrorHandler; import com.gargoylesoftware.htmlunit.WebClient; import com.gargoylesoftware.htmlunit.html.HtmlForm; import com.gargoylesoftware.htmlunit.html.HtmlPage; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWiremockTestResource; @QuarkusTest @QuarkusTestResource(OidcWiremockTestResource.class) public class CodeFlowAuthorizationTest { @Test public void testCodeFlow() throws Exception { try (final WebClient webClient = createWebClient()) { // the test REST endpoint listens on '/code-flow' HtmlPage page = webClient.getPage("http://localhost:8081/code-flow"); HtmlForm form = page.getFormByName("form"); // user 'alice' has the 'user' role form.getInputByName("username").type("alice"); form.getInputByName("password").type("alice"); page = form.getInputByValue("login").click(); assertEquals("alice", page.getBody().asText()); } } private WebClient createWebClient() { WebClient webClient = new WebClient(); webClient.setCssErrorHandler(new SilentCssErrorHandler()); return webClient; } } OidcWiremockTestResource recognizes alice and admin users. The user alice has the user role only by default - it can be customized with a quarkus.test.oidc.token.user-roles system property. The user admin has the user and admin roles by default - it can be customized with a quarkus.test.oidc.token.admin-roles system property. Additionally, OidcWiremockTestResource sets the token issuer and audience to https://service.example.com , which can be customized with quarkus.test.oidc.token.issuer and quarkus.test.oidc.token.audience system properties. OidcWiremockTestResource can be used to emulate all OIDC providers. 3.5.2. Dev Services for Keycloak Using Dev Services for Keycloak is recommended for integration testing against Keycloak. Dev Services for Keycloak will start and initialize a test container: it will create a quarkus realm, a quarkus-app client ( secret secret), and add alice ( admin and user roles) and bob ( user role) users, where all of these properties can be customized. First, prepare application.properties . You can start with a completely empty application.properties file as Dev Services for Keycloak will register quarkus.oidc.auth-server-url pointing to the running test container as well as quarkus.oidc.client-id=quarkus-app and quarkus.oidc.credentials.secret=secret . However, if you already have all the required quarkus-oidc properties configured, then you only need to associate quarkus.oidc.auth-server-url with the prod profile for Dev Services for Keycloak to start a container. For example: %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus If a custom realm file has to be imported into Keycloak before running the tests, then you can configure Dev Services for Keycloak as follows: %prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.keycloak.devservices.realm-path=quarkus-realm.json Finally, write the test code the same way as it is described in the Wiremock section.
The only difference is that @QuarkusTestResource is no longer needed: @QuarkusTest public class CodeFlowAuthorizationTest { } 3.5.3. TestSecurity annotation You can use @TestSecurity and @OidcSecurity annotations to test the web-app application endpoint code, which depends on either one of the following injections, or all four: ID JsonWebToken Access JsonWebToken UserInfo OidcConfigurationMetadata For more information, see Use TestingSecurity with injected JsonWebToken . 3.5.4. Checking errors in the logs To see details about the token verification errors, you must enable io.quarkus.oidc.runtime.OidcProvider TRACE level logging: quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".level=TRACE quarkus.log.category."io.quarkus.oidc.runtime.OidcProvider".min-level=TRACE To see details about the OidcProvider client initialization errors, enable io.quarkus.oidc.runtime.OidcRecorder TRACE level logging: quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".level=TRACE quarkus.log.category."io.quarkus.oidc.runtime.OidcRecorder".min-level=TRACE From the quarkus dev console, type j to change the application global log level. 3.6. References OIDC configuration properties Configuring well-known OpenID Connect providers OpenID Connect and OAuth2 client and filters reference guide Dev Services for Keycloak Choosing between OpenID Connect, SmallRye JWT, and OAuth2 authentication mechanisms Combining authentication mechanisms Quarkus Security overview Keycloak documentation OpenID Connect JSON Web Token
|
[
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.discovery-enabled=false Authorization endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/auth quarkus.oidc.authorization-path=/protocol/openid-connect/auth Token endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token quarkus.oidc.token-path=/protocol/openid-connect/token JWK set endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/certs quarkus.oidc.jwks-path=/protocol/openid-connect/certs UserInfo endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/userinfo quarkus.oidc.user-info-path=/protocol/openid-connect/userinfo Token Introspection endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/token/introspect quarkus.oidc.introspection-path=/protocol/openid-connect/token/introspect End-session endpoint: http://localhost:8180/realms/quarkus/protocol/openid-connect/logout quarkus.oidc.end-session-path=/protocol/openid-connect/logout",
"Metadata is auto-discovered but it does not return an end-session endpoint URL quarkus.oidc.auth-server-url=http://localhost:8180/oidcprovider/account Configure the end-session URL locally. It can be an absolute or relative (to 'quarkus.oidc.auth-server-url') address quarkus.oidc.end-session-path=logout",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=mysecret",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.client-secret.provider.key=mysecret-key Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.client-secret.provider.name=oidc-credentials-provider",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=post",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app This is a key which will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc.credentials.jwt.secret-provider.key=mysecret-key Set it only if more than one CredentialsProvider can be registered quarkus.oidc.credentials.jwt.secret-provider.name=oidc-credentials-provider",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-store-file=keystore.jks quarkus.oidc.credentials.jwt.key-store-password=mypassword quarkus.oidc.credentials.jwt.key-password=mykeypassword Private key alias inside the keystore quarkus.oidc.credentials.jwt.key-id=mykeyAlias",
"private_key_jwt client authentication quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus/ quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.jwt.key-file=privateKey.pem This is a token key identifier 'kid' header - set it if your OIDC provider requires it: Note if the key is represented in a JSON Web Key (JWK) format with a `kid` property, then using 'quarkus.oidc.credentials.jwt.token-key-id' is not necessary. quarkus.oidc.credentials.jwt.token-key-id=mykey Use RS512 signature algorithm instead of the default RS256 quarkus.oidc.credentials.jwt.signature-algorithm=RS512 The token endpoint URL is the default audience value, use the base address URL instead: quarkus.oidc.credentials.jwt.audience=USD{quarkus.oidc-client.auth-server-url} custom subject instead of the client id: quarkus.oidc.credentials.jwt.subject=custom-subject custom issuer instead of the client id: quarkus.oidc.credentials.jwt.issuer=custom-issuer",
"Apple provider configuration sets a 'client_secret_post_jwt' authentication method quarkus.oidc.provider=apple quarkus.oidc.client-id=USD{apple.client-id} quarkus.oidc.credentials.jwt.key-file=ecPrivateKey.pem quarkus.oidc.credentials.jwt.token-key-id=USD{apple.key-id} Apple provider configuration sets ES256 signature algorithm quarkus.oidc.credentials.jwt.subject=USD{apple.subject} quarkus.oidc.credentials.jwt.issuer=USD{apple.issuer}",
"quarkus.oidc.tls.verification=certificate-validation Keystore configuration quarkus.oidc.tls.key-store-file=client-keystore.jks quarkus.oidc.tls.key-store-password=USD{key-store-password} Add more keystore properties if needed: #quarkus.oidc.tls.key-store-alias=keyAlias #quarkus.oidc.tls.key-store-alias-password=keyAliasPassword Truststore configuration quarkus.oidc.tls.trust-store-file=client-truststore.jks quarkus.oidc.tls.trust-store-password=USD{trust-store-password} Add more truststore properties if needed: #quarkus.oidc.tls.trust-store-alias=certAlias",
"quarkus.oidc.provider=strava quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.client-secret.value=mysecret quarkus.oidc.credentials.client-secret.method=query",
"quarkus.oidc.introspection-credentials.name=introspection-user-name quarkus.oidc.introspection-credentials.secret=introspection-user-secret",
"package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable public class OidcTokenRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { OidcConfigurationMetadata metadata = contextProps.get(OidcConfigurationMetadata.class.getName()); 1 // Metadata URI is absolute, request URI value is relative if (metadata.getTokenUri().endsWith(request.uri())) { 2 request.putHeader(\"TokenGrantDigest\", calculateDigest(buffer.toString())); } } private String calculateDigest(String bodyString) { // Apply the required digest algorithm to the body string } }",
"import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcEndpoint; import io.quarkus.oidc.common.OidcEndpoint.Type; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable @OidcEndpoint(value = Type.DISCOVERY) 1 public class OidcDiscoveryRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProps) { request.putHeader(\"Discovery\", \"OK\"); } }",
"quarkus.oidc.authentication.extra-params.response_mode=query",
"quarkus.oidc.authentication.error-path=/error",
"import jakarta.inject.Inject; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.IdToken; import io.quarkus.security.Authenticated; @Path(\"/web-app\") @Authenticated public class ProtectedResource { @Inject @IdToken JsonWebToken idToken; @GET public String getUserName() { return idToken.getName(); } }",
"import jakarta.inject.Inject; import org.eclipse.microprofile.jwt.JsonWebToken; import io.quarkus.oidc.AccessTokenCredential; import io.quarkus.security.Authenticated; @Path(\"/web-app\") @Authenticated public class ProtectedResource { @Inject JsonWebToken accessToken; // or // @Inject // AccessTokenCredential accessTokenCredential; @GET public String getReservationOnBehalfOfUser() { String rawAccessToken = accessToken.getRawToken(); //or //String rawAccessToken = accessTokenCredential.getToken(); // Use the raw access token to access a remote endpoint. // For example, use RestClient to set this token as a `Bearer` scheme value of the HTTP `Authorization` header: // `Authorization: Bearer rawAccessToken`. return getReservationfromRemoteEndpoint(rawAccesstoken); } }",
"quarkus.oidc.authentication.pkce-required=true quarkus.oidc.authentication.state-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=quarkus-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.token-state-manager.split-tokens=true quarkus.oidc.token-state-manager.encryption-secret=eUk1p7UB3nFiXZGUXi0uph1Y9p34YhBU",
"package io.quarkus.oidc.test; import jakarta.annotation.Priority; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.inject.Alternative; import jakarta.inject.Inject; import io.quarkus.oidc.AuthorizationCodeTokens; import io.quarkus.oidc.OidcTenantConfig; import io.quarkus.oidc.TokenStateManager; import io.quarkus.oidc.runtime.DefaultTokenStateManager; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped @Alternative @Priority(1) public class CustomTokenStateManager implements TokenStateManager { @Inject DefaultTokenStateManager tokenStateManager; @Override public Uni<String> createTokenState(RoutingContext routingContext, OidcTenantConfig oidcConfig, AuthorizationCodeTokens sessionContent, TokenStateManager.CreateTokenStateRequestContext requestContext) { return tokenStateManager.createTokenState(routingContext, oidcConfig, sessionContent, requestContext) .map(t -> (t + \"|custom\")); } @Override public Uni<AuthorizationCodeTokens> getTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, TokenStateManager.GetTokensRequestContext requestContext) { if (!tokenState.endsWith(\"|custom\")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.getTokens(routingContext, oidcConfig, defaultState, requestContext); } @Override public Uni<Void> deleteTokens(RoutingContext routingContext, OidcTenantConfig oidcConfig, String tokenState, TokenStateManager.DeleteTokensRequestContext requestContext) { if (!tokenState.endsWith(\"|custom\")) { throw new IllegalStateException(); } String defaultState = tokenState.substring(0, tokenState.length() - 7); return tokenStateManager.deleteTokens(routingContext, oidcConfig, defaultState, requestContext); } }",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.path=/logout Logged-out users should be returned to the /welcome.html site which will offer an option to re-login: quarkus.oidc.logout.post-logout-path=/welcome.html Only the authenticated users can initiate a logout: quarkus.http.auth.permission.authenticated.paths=/logout quarkus.http.auth.permission.authenticated.policy=authenticated All users can see the Welcome page: quarkus.http.auth.permission.public.paths=/welcome.html quarkus.http.auth.permission.public.policy=permit",
"quarkus.oidc.auth-server-url=https://dev-xxx.us.auth0.com quarkus.oidc.client-id=redacted quarkus.oidc.credentials.secret=redacted quarkus.oidc.application-type=web-app quarkus.oidc.tenant-logout.logout.path=/logout quarkus.oidc.tenant-logout.logout.post-logout-path=/welcome.html Auth0 does not return the `end_session_endpoint` metadata property. Instead, you must configure it: quarkus.oidc.end-session-path=v2/logout Auth0 will not recognize the 'post_logout_redirect_uri' query parameter so ensure it is named as 'returnTo': quarkus.oidc.logout.post-logout-uri-param=returnTo Set more properties if needed. For example, if 'client_id' is provided, then a valid logout URI should be set as the Auth0 Application property, without it - as Auth0 Tenant property: quarkus.oidc.logout.extra-params.client_id=USD{quarkus.oidc.client-id}",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.backchannel.path=/back-channel-logout",
"quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.oidc.client-id=frontend quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app quarkus.oidc.logout.frontchannel.path=/front-channel-logout",
"import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.OidcSession; @Path(\"/service\") public class ServiceResource { @Inject OidcSession oidcSession; @GET @Path(\"logout\") public String logout() { oidcSession.logout().await().indefinitely(); return \"You are logged out\". }",
"quarkus.oidc.provider=github quarkus.oidc.client-id=github_app_clientid quarkus.oidc.credentials.secret=github_app_clientsecret user:email scope is requested by default, use 'quarkus.oidc.authentication.scopes' to request different scopes such as `read:user`. See https://docs.github.com/en/developers/apps/building-oauth-apps/scopes-for-oauth-apps for more information. Consider enabling UserInfo Cache quarkus.oidc.token-cache.max-size=1000 quarkus.oidc.token-cache.time-to-live=5M # Or having UserInfo cached inside IdToken itself quarkus.oidc.cache-user-info-in-idtoken=true",
"import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.oidc.UserInfo; import io.quarkus.security.Authenticated; @Path(\"/github\") @Authenticated public class TokenResource { @Inject UserInfo userInfo; @GET @Path(\"/userinfo\") @Produces(\"application/json\") public String getUserInfo() { return userInfo.getUserInfoString(); } }",
"package io.quarkus.it.keycloak; import java.security.Principal; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.UserInfo; import io.quarkus.security.identity.AuthenticationRequestContext; import io.quarkus.security.identity.SecurityIdentity; import io.quarkus.security.identity.SecurityIdentityAugmentor; import io.quarkus.security.runtime.QuarkusSecurityIdentity; import io.smallrye.mutiny.Uni; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomSecurityIdentityAugmentor implements SecurityIdentityAugmentor { @Override public Uni<SecurityIdentity> augment(SecurityIdentity identity, AuthenticationRequestContext context) { RoutingContext routingContext = identity.getAttribute(RoutingContext.class.getName()); if (routingContext != null && routingContext.normalizedPath().endsWith(\"/github\")) { QuarkusSecurityIdentity.Builder builder = QuarkusSecurityIdentity.builder(identity); UserInfo userInfo = identity.getAttribute(\"userinfo\"); builder.setPrincipal(new Principal() { @Override public String getName() { return userInfo.getString(\"preferred_username\"); } }); identity = builder.build(); } return Uni.createFrom().item(identity); } }",
"import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.quarkus.security.Authenticated; import io.quarkus.security.identity.SecurityIdentity; @Path(\"/service\") @Authenticated public class TokenResource { @Inject SecurityIdentity identity; @GET @Path(\"/google\") @Produces(\"application/json\") public String getUserName() { return identity.getPrincipal().getName(); } @GET @Path(\"/github\") @Produces(\"application/json\") public String getUserName() { return identity.getPrincipal().getUserName(); } }",
"import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import io.quarkus.oidc.IdTokenCredential; import io.quarkus.oidc.SecurityEvent; import io.quarkus.security.identity.AuthenticationRequestContext; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class SecurityEventListener { public void event(@Observes SecurityEvent event) { String tenantId = event.getSecurityIdentity().getAttribute(\"tenant-id\"); RoutingContext vertxContext = event.getSecurityIdentity().getAttribute(RoutingContext.class.getName()); vertxContext.put(\"listener-message\", String.format(\"event:%s,tenantId:%s\", event.getEventType().name(), tenantId)); } }",
"import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.oidc.JavaScriptRequestChecker; import io.vertx.ext.web.RoutingContext; @ApplicationScoped public class CustomJavaScriptRequestChecker implements JavaScriptRequestChecker { @Override public boolean isJavaScriptRequest(RoutingContext context) { return \"true\".equals(context.request().getHeader(\"HX-Request\")); } }",
"Future<void> callQuarkusService() async { Map<String, String> headers = Map.fromEntries([MapEntry(\"X-Requested-With\", \"JavaScript\")]); await http .get(\"https://localhost:443/serviceCall\") .then((response) { if (response.statusCode == 499) { window.location.assign(\"https://localhost.com:443/serviceCall\"); } }); }",
"quarkus.http.proxy.proxy-address-forwarding=true quarkus.http.proxy.allow-forwarded=false quarkus.http.proxy.enable-forwarded-host=true quarkus.http.proxy.forwarded-host-header=X-ORIGINAL-HOST",
"<dependency> <groupId>net.sourceforge.htmlunit</groupId> <artifactId>htmlunit</artifactId> <exclusions> <exclusion> <groupId>org.eclipse.jetty</groupId> <artifactId>*</artifactId> </exclusion> </exclusions> <scope>test</scope> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency>",
"testImplementation(\"net.sourceforge.htmlunit:htmlunit\") testImplementation(\"io.quarkus:quarkus-junit5\")",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-test-oidc-server</artifactId> <scope>test</scope> </dependency>",
"testImplementation(\"io.quarkus:quarkus-test-oidc-server\")",
"keycloak.url is set by OidcWiremockTestResource quarkus.oidc.auth-server-url=USD{keycloak.url}/realms/quarkus/ quarkus.oidc.client-id=quarkus-web-app quarkus.oidc.credentials.secret=secret quarkus.oidc.application-type=web-app",
"import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.Test; import com.gargoylesoftware.htmlunit.SilentCssErrorHandler; import com.gargoylesoftware.htmlunit.WebClient; import com.gargoylesoftware.htmlunit.html.HtmlForm; import com.gargoylesoftware.htmlunit.html.HtmlPage; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.junit.QuarkusTest; import io.quarkus.test.oidc.server.OidcWiremockTestResource; @QuarkusTest @QuarkusTestResource(OidcWiremockTestResource.class) public class CodeFlowAuthorizationTest { @Test public void testCodeFlow() throws Exception { try (final WebClient webClient = createWebClient()) { // the test REST endpoint listens on '/code-flow' HtmlPage page = webClient.getPage(\"http://localhost:8081/code-flow\"); HtmlForm form = page.getFormByName(\"form\"); // user 'alice' has the 'user' role form.getInputByName(\"username\").type(\"alice\"); form.getInputByName(\"password\").type(\"alice\"); page = form.getInputByValue(\"login\").click(); assertEquals(\"alice\", page.getBody().asText()); } } private WebClient createWebClient() { WebClient webClient = new WebClient(); webClient.setCssErrorHandler(new SilentCssErrorHandler()); return webClient; } }",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus",
"%prod.quarkus.oidc.auth-server-url=http://localhost:8180/realms/quarkus quarkus.keycloak.devservices.realm-path=quarkus-realm.json",
"@QuarkusTest public class CodeFlowAuthorizationTest { }",
"quarkus.log.category.\"io.quarkus.oidc.runtime.OidcProvider\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.runtime.OidcProvider\".min-level=TRACE",
"quarkus.log.category.\"io.quarkus.oidc.runtime.OidcRecorder\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.runtime.OidcRecorder\".min-level=TRACE"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/openid_connect_oidc_authentication/security-oidc-code-flow-authentication
|
8.7. abrt
|
8.7. abrt 8.7.1. RHBA-2014:1572 - abrt bug fix and enhancement update Updated abrt packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The abrt packages provide the Automatic Bug Reporting Tool. Bug Fix BZ# 1084467 Previously, kernel oops messages were deleted by ABRT, and thus the user was not sufficiently informed about the kernel behavior. With this update, kernel oops messages are kept and also supplemented by an explanation. The behavior can be restored in the newly created /etc/abrt/plugins/oops.conf file (man abrt-oops.conf). In addition, this update adds the following Enhancement BZ# 989530 With this update, ABRT messages include machine host name. Users of abrt are advised to upgrade to these updated packages, which fix this bug and add this enhancement.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/abrt
|
Chapter 207. Kubernetes Resources Quota Component
|
Chapter 207. Kubernetes Resources Quota Component Available as of Camel version 2.17 The Kubernetes Resources Quota component is one of Kubernetes Components which provides a producer to execute kubernetes resource quota operations. 207.1. Component Options The Kubernetes Resources Quota component has no options. 207.2. Endpoint Options The Kubernetes Resources Quota endpoint is configured using URI syntax: with the following path and query parameters: 207.2.1. Path Parameters (1 parameters): Name Description Default Type masterUrl Required Kubernetes API server URL String 207.2.2. Query Parameters (20 parameters): Name Description Default Type apiVersion (producer) The Kubernetes API Version to use String dnsDomain (producer) The dns domain, used for ServiceCall EIP String kubernetesClient (producer) Default KubernetesClient to use if provided KubernetesClient operation (producer) Producer operation to do on Kubernetes String portName (producer) The port name, used for ServiceCall EIP String portProtocol (producer) The port protocol, used for ServiceCall EIP tcp String connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean caCertData (security) The CA Cert Data String caCertFile (security) The CA Cert File String clientCertData (security) The Client Cert Data String clientCertFile (security) The Client Cert File String clientKeyAlgo (security) The Key Algorithm used by the client String clientKeyData (security) The Client Key data String clientKeyFile (security) The Client Key file String clientKeyPassphrase (security) The Client Key Passphrase String oauthToken (security) The Auth Token String password (security) Password to connect to Kubernetes String trustCerts (security) Define if the certs we used are trusted anyway or not Boolean username (security) Username to connect to Kubernetes String 207.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
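To illustrate the producer usage described above, a Java DSL route might look like the following sketch. It is an assumption-laden example rather than a definitive one: the master URL and the RAW()-wrapped token value are placeholders, and the listResourcesQuotas operation name should be checked against the operations supported by your Camel version.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kubernetes.KubernetesConstants;

public class ResourceQuotaRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Call the kubernetes-resources-quota producer to list quotas in a namespace.
        // The endpoint follows the kubernetes-resources-quota:masterUrl URI syntax
        // with query options from the tables above.
        from("direct:listQuotas")
            .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default"))
            .to("kubernetes-resources-quota:https://kubernetes.example.com:6443"
                + "?oauthToken=RAW(myToken)"
                + "&operation=listResourcesQuotas")
            .log("Resource quotas: ${body}");
    }
}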
|
[
"kubernetes-resources-quota:masterUrl"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/kubernetes-resources-quota-component
|
Chapter 5. Configuring the database
|
Chapter 5. Configuring the database 5.1. Using an existing PostgreSQL database If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm extension for a successful deployment. Use the following procedure to deploy an existing PostgreSQL database. Procedure Create a config.yaml file with the necessary database fields. For example: Example config.yaml file: DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database Create a Secret using the configuration file: Create a QuayRegistry YAML file which marks the postgres component as unmanaged and references the created Secret . For example: Example quayregistry.yaml file apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: postgres managed: false Deploy the registry as detailed in the following sections. 5.1.1. Database configuration This section describes the database configuration fields available for Red Hat Quay deployments. 5.1.1.1. Database URI With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field: Table 5.1. Database URI Field Type Description DB_URI (Required) String The URI for accessing the database, including any credentials. Example DB_URI field: postgresql://quayuser:[email protected]:5432/quay 5.1.1.2. Database connection arguments Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific. The following table describes database connection arguments: Table 5.2. Database connection arguments Field Type Description DB_CONNECTION_ARGS Object Optional connection arguments for the database, such as timeouts and SSL/TLS. .autorollback Boolean Whether to use thread-local connections. Should always be true .threadlocals Boolean Whether to use auto-rollback connections. Should always be true 5.1.1.2.1. PostgreSQL SSL/TLS connection arguments With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration: DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert The sslmode option determines whether, or with, what priority a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes: Table 5.3. SSL/TLS options Mode Description disable Your configuration only tries non-SSL/TLS connections. allow Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. prefer (Default) Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. require Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. verify-ca Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). verify-full Only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions . 5.1.1.2.2. 
MySQL SSL/TLS connection arguments The following example shows a sample MySQL SSL/TLS configuration: DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 5.1.2. Using the managed PostgreSQL database With Red Hat Quay 3.9, if your database is managed by the Red Hat Quay Operator, updating from Red Hat Quay 3.8 to 3.9 automatically handles upgrading PostgreSQL 10 to PostgreSQL 13. Important Users with a managed database are required to upgrade their PostgreSQL database from 10 to 13. If your Red Hat Quay and Clair databases are managed by the Operator, the database upgrades for each component must succeed for the 3.9.0 upgrade to be successful. If either of the database upgrades fails, the entire Red Hat Quay version upgrade fails. This behavior is expected. If you do not want the Red Hat Quay Operator to upgrade your PostgreSQL deployment from PostgreSQL 10 to 13, you must set the PostgreSQL parameter to managed: false in your quayregistry.yaml file. For more information about setting your database to unmanaged, see Using an existing Postgres database . Important It is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy . If you want your PostgreSQL database to match the same version as your Red Hat Enterprise Linux (RHEL) system, see Migrating to a RHEL 8 version of PostgreSQL for RHEL 8 or Migrating to a RHEL 9 version of PostgreSQL for RHEL 9. For more information about the Red Hat Quay 3.8 to 3.9 procedure, see "Updating Red Hat Quay and the Red Hat Quay and Clair PostgreSQL databases on OpenShift Container Platform". 5.1.2.1. PostgreSQL database recommendations The Red Hat Quay team recommends the following for managing your PostgreSQL database. Database backups should be performed regularly using either the supplied tools on the PostgreSQL image or your own backup infrastructure. The Red Hat Quay Operator does not currently ensure that the PostgreSQL database is backed up. Restoring the PostgreSQL database from a backup must be done using PostgreSQL tools and procedures. Be aware that your Quay pods should not be running while the database restore is in progress. Database disk space is allocated automatically by the Red Hat Quay Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Red Hat Quay installations but might not be sufficient for your use cases. Resizing the database volume is currently not handled by the Red Hat Quay Operator. 5.2. Configuring external Redis Use the content in this section to set up an external Redis deployment. 5.2.1. Using an unmanaged Redis database Use the following procedure to set up an external Redis database.
Procedure Create a config.yaml file using the following Redis fields: BUILDLOGS_REDIS: host: quay-server.example.com port: 6379 ssl: false USER_EVENTS_REDIS: host: quay-server.example.com port: 6379 ssl: false Enter the following command to create a secret using the configuration file: USD oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret Create a quayregistry.yaml file that sets the Redis component to unmanaged and references the created secret: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: redis managed: false Deploy the Red Hat Quay registry. Additional resources Redis configuration fields 5.2.2. Using unmanaged Horizontal Pod Autoscalers Horizontal Pod Autoscalers (HPAs) are now included with the Clair , Quay , and Mirror pods, so that they now automatically scale during load spikes. As HPA is configured by default to be managed, the number of Clair , Quay , and Mirror pods is set to two. This facilitates the avoidance of downtime when updating or reconfiguring Red Hat Quay by the Operator or during rescheduling events. 5.2.2.1. Disabling the Horizontal Pod Autoscaler To disable autoscaling or create your own HorizontalPodAutoscaler , specify the component as unmanaged in the QuayRegistry instance. For example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false 5.2.3. Disabling the Route component Use the following procedure to prevent the Red Hat Quay Operator from creating a route. Procedure Set the component as managed: false in the quayregistry.yaml file: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false Edit the config.yaml file to specify that Red Hat Quay handles SSL/TLS. For example: ... EXTERNAL_TLS_TERMINATION: false ... SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com ... PREFERRED_URL_SCHEME: https ... If you do not configure the unmanaged route correctly, the following error is returned: { { "kind":"QuayRegistry", "namespace":"quay-enterprise", "name":"example-registry", "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa", "apiVersion":"quay.redhat.com/v1", "resourceVersion":"2418527" }, "reason":"ConfigInvalid", "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields" } Note Disabling the default route means you are now responsible for creating a Route , Service , or Ingress in order to access the Red Hat Quay instance. Additionally, whatever DNS you use must match the SERVER_HOSTNAME in the Red Hat Quay config. 5.2.4. Disabling the monitoring component If you install the Red Hat Quay Operator in a single namespace, the monitoring component is automatically set to managed: false . Use the following reference to explicitly disable monitoring. Unmanaged monitoring apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: monitoring managed: false To enable monitoring in this scenario, see Enabling monitoring when the Red Hat Quay Operator is installed in a single namespace . 5.2.5. 
Disabling the mirroring component To disable mirroring explicitly, use the following YAML configuration: Unmanaged mirroring example YAML configuration apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: mirroring managed: false
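Returning to the requirement in section 5.1 that an externally managed PostgreSQL database must have the pg_trgm extension enabled, the following sketch shows one way to do this from Java over JDBC. The connection URL and credentials are placeholders, the PostgreSQL JDBC driver is assumed to be on the classpath, and running the same SQL with psql (CREATE EXTENSION IF NOT EXISTS pg_trgm;) achieves the same result.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class EnablePgTrgm {

    public static void main(String[] args) throws SQLException {
        // Placeholder connection details for the externally managed database.
        String url = "jdbc:postgresql://quay-server.example.com:5432/quay";
        try (Connection connection = DriverManager.getConnection(url, "quayuser", "quaypass");
             Statement statement = connection.createStatement()) {
            // Requires a role with sufficient privileges to create extensions.
            statement.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm");
        }
    }
}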
|
[
"DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database",
"kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: postgres managed: false",
"DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert",
"DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert",
"BUILDLOGS_REDIS: host: quay-server.example.com port: 6379 ssl: false USER_EVENTS_REDIS: host: quay-server.example.com port: 6379 ssl: false",
"oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: redis managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false",
"EXTERNAL_TLS_TERMINATION: false SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com PREFERRED_URL_SCHEME: https",
"{ { \"kind\":\"QuayRegistry\", \"namespace\":\"quay-enterprise\", \"name\":\"example-registry\", \"uid\":\"d5879ba5-cc92-406c-ba62-8b19cf56d4aa\", \"apiVersion\":\"quay.redhat.com/v1\", \"resourceVersion\":\"2418527\" }, \"reason\":\"ConfigInvalid\", \"message\":\"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields\" }",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: monitoring managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: mirroring managed: false"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-the-database-poc
|
Chapter 11. Migrating
|
Chapter 11. Migrating Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project. The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications. Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments. 11.1. Migrating with sidecars The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This endpoint points to the Gateway of a TempoStack instance deployed by using the <example> Tempo Operator. Create a service account for running your application. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar Create a cluster role for the permissions needed by some processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status. 
Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object. Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object. Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces. 11.2. Migrating without sidecars You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure OpenTelemetry Collector deployment. Create the project where the OpenTelemetry Collector will be deployed. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account for running the OpenTelemetry Collector instance. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Create a cluster role for setting the required permissions for the processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 Permissions for the pods and namespaces resources are required for the k8sattributesprocessor . 2 Permissions for infrastructures and infrastructures/status are required for resourcedetectionprocessor . Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the OpenTelemetry Collector instance. Note This collector will export traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and place here the correct endpoint. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-example-gateway:8090" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] Point your tracing endpoint to the OpenTelemetry Operator. 
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint. Example of exporting traces by using the jaegerexporter with Golang exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1 1 The URL points to the OpenTelemetry Collector API endpoint.
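For Java applications, an analogous change might look like the following sketch: keep the Jaeger exporter but point it at the Collector's Jaeger gRPC receiver instead of the Jaeger collector. The class comes from the OpenTelemetry Java SDK's opentelemetry-exporter-jaeger artifact, and the endpoint value is an assumption about where the receiver configured above is exposed in the observability namespace.

import io.opentelemetry.exporter.jaeger.JaegerGrpcSpanExporter;
import io.opentelemetry.sdk.trace.SdkTracerProvider;
import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;

public class TracingSetup {

    public static SdkTracerProvider tracerProvider() {
        // Point the existing Jaeger exporter at the OpenTelemetry Collector's
        // Jaeger gRPC receiver rather than at the Jaeger collector endpoint.
        JaegerGrpcSpanExporter exporter = JaegerGrpcSpanExporter.builder()
                .setEndpoint("http://otel-collector.observability.svc:14250")
                .build();

        return SdkTracerProvider.builder()
                .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
                .build();
    }
}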
|
[
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/red_hat_build_of_opentelemetry/dist-tracing-otel-migrating
|
5.5. Issues with Large Number of LUNs
|
5.5. Issues with Large Number of LUNs When a large number of LUNs are added to a node, using multipathed devices can significantly increase the time it takes for the udev device manager to create device nodes for them. If you experience this problem, you can correct it by deleting the following line in /etc/udev/rules.d/40-multipath.rules : This line causes the udev device manager to run multipath every time a block device is added to the node. Even with this line removed, the multipathd daemon will still automatically create multipathed devices, and multipath will still be called during the boot process for nodes with multipathed root file systems. The only change is that multipathed devices will not be automatically created when the multipathd daemon is not running, which should not be a problem for the vast majority of multipath users.
|
[
"KERNEL!=\"dm-[0-9]*\", ACTION==\"add\", PROGRAM==\"/bin/bash -c '/sbin/lsmod | /bin/grep ^dm_multipath'\", RUN+=\"/sbin/multipath -v0 %M:%m\""
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/dm_multipath/many_luns
|
Chapter 1. Installing an on-premise cluster using the Assisted Installer
|
Chapter 1. Installing an on-premise cluster using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. 1.1. Using the Assisted Installer The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console . The Assisted Installer supports the various deployment platforms with a focus on bare metal, Nutanix, and vSphere infrastructures. The Assisted Installer provides installation functionality as a service. This software-as-a-service (SaaS) approach has the following advantages: Web user interface: The web user interface performs cluster installation without the user having to create the installation configuration files manually. No bootstrap node: A bootstrap node is not required when installing with the Assisted Installer. The bootstrapping process executes on a node within the cluster. Hosting: The Assisted Installer hosts: Ignition files The installation configuration A discovery ISO The installer Streamlined installation workflow: Deployment does not require in-depth knowledge of OpenShift Container Platform. The Assisted Installer provides reasonable defaults and provides the installer as a service, which: Eliminates the need to install and run the OpenShift Container Platform installer locally. Ensures the latest version of the installer up to the latest tested z-stream releases. Older versions remain available, if needed. Enables building automation by using the API without the need to run the OpenShift Container Platform installer locally. Advanced networking: The Assisted Installer supports IPv4 networking with SDN and OVN, IPv6 and dual stack networking with OVN only, NMState-based static IP addressing, and an HTTP/S proxy. OVN is the default Container Network Interface (CNI) for OpenShift Container Platform 4.12 and later, but you can use SDN. Preinstallation validation: The Assisted Installer validates the configuration before installation to ensure a high probability of success. The validation process includes the following checks: Ensuring network connectivity Ensuring sufficient network bandwidth Ensuring connectivity to the registry Ensuring time synchronization between cluster nodes Verifying that the cluster nodes meet the minimum hardware requirements Validating the installation configuration parameters REST API: The Assisted Installer has a REST API, enabling automation. The Assisted Installer supports installing OpenShift Container Platform on premises in a connected environment, including with an optional HTTP/S proxy. It can install the following: Highly available OpenShift Container Platform or single-node OpenShift (SNO) OpenShift Container Platform on bare metal, Nutanix, or vSphere with full platform integration, or other virtualization platforms without integration Optional: OpenShift Virtualization, multicluster engine, Logical Volume Manager (LVM) Storage, and OpenShift Data Foundation Note Currently, OpenShift Virtualization and LVM Storage are not supported on IBM Z(R) ( s390x ) architecture. The user interface provides an intuitive interactive workflow where automation does not exist or is not required. Users may also automate installations using the REST API. See the Assisted Installer for OpenShift Container Platform documentation for details. 1.2. 
API support for the Assisted Installer Supported APIs for the Assisted Installer are stable for a minimum of three months from the announcement of deprecation.
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on-premise_with_assisted_installer/installing-on-prem-assisted
|