Chapter 2. Distributed tracing architecture
Chapter 2. Distributed tracing architecture 2.1. Distributed tracing architecture Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. Red Hat OpenShift distributed tracing platform lets you perform distributed tracing, which records the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together - usually executed in different processes or hosts - to understand a whole chain of events in a distributed transaction. Developers can visualize call flows in large microservice architectures with distributed tracing. It is valuable for understanding serialization, parallelism, and sources of latency. Red Hat OpenShift distributed tracing platform records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Red Hat OpenShift distributed tracing platform that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships. 2.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 2.1.2. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.1.3. Red Hat OpenShift distributed tracing platform architecture Red Hat OpenShift distributed tracing platform is made up of several components that work together to collect, store, and display tracing data. Red Hat OpenShift distributed tracing platform (Tempo) - This component is based on the open source Grafana Tempo project . Gateway - The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service. Distributor - The Distributor accepts spans in multiple formats including Jaeger, OpenTelemetry, and Zipkin. It routes spans to Ingesters by hashing the traceID and using a distributed consistent hash ring. Ingester - The Ingester batches a trace into blocks, creates bloom filters and indexes, and then flushes it all to the back end. 
Query Frontend - The Query Frontend is responsible for sharding the search space for an incoming query. The search query is then sent to the Queriers. The Query Frontend deployment exposes the Jaeger UI through the Tempo Query sidecar. Querier - The Querier is responsible for finding the requested trace ID in either the Ingesters or the back-end storage. Depending on parameters, it can query the Ingesters and pull Bloom indexes from the back end to search blocks in object storage. Compactor - The Compactors stream blocks to and from the back-end storage to reduce the total number of blocks. Red Hat build of OpenTelemetry - This component is based on the open source OpenTelemetry project . OpenTelemetry Collector - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. Red Hat OpenShift distributed tracing platform (Jaeger) - This component is based on the open source Jaeger project . Important The Red Hat OpenShift distributed tracing platform (Jaeger) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Client (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The distributed tracing platform (Jaeger) clients are language-specific implementations of the OpenTracing API. They might be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Agent (Jaeger agent, Server Queue, Processor Workers) - The distributed tracing platform (Jaeger) agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. Jaeger Collector (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Red Hat OpenShift distributed tracing platform (Jaeger) has a pluggable mechanism for span storage. 
Red Hat OpenShift distributed tracing platform (Jaeger) supports the Elasticsearch storage. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Red Hat OpenShift distributed tracing platform can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. Jaeger Console - With the Red Hat OpenShift distributed tracing platform (Jaeger) user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 2.1.4. Additional resources Red Hat build of OpenTelemetry
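The Distributor described above accepts spans in OTLP, Jaeger, and Zipkin formats. As a quick, hedged way to exercise that ingestion path, the following sketch posts a single hand-written span to an OpenTelemetry Collector over OTLP/HTTP; the Collector hostname my-otel-collector is a placeholder for your Collector service or route, while port 4318 and the /v1/traces path are the OTLP/HTTP defaults.

# Write a minimal OTLP/JSON payload containing one span for service "test-service".
cat > /tmp/span.json <<'EOF'
{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test-service"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"test-span","kind":2,"startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000"}]}]}]}
EOF
# Post it to the Collector's OTLP/HTTP receiver and check for an HTTP 200 response.
curl -sS -X POST "http://my-otel-collector:4318/v1/traces" \
  -H "Content-Type: application/json" \
  -d @/tmp/span.json

If the pipeline is wired to a Tempo or Jaeger back end, the span should then be searchable under the test-service service name.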
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/distributed_tracing/distributed-tracing-architecture
Chapter 15. EL
Chapter 15. EL Overview The Unified Expression Language (EL) was originally specified as part of the JSP 2.1 standard (JSR-245), but it is now available as a standalone language. Apache Camel integrates with JUEL ( http://juel.sourceforge.net/ ), which is an open source implementation of the EL language. Adding JUEL package To use EL in your routes, you need to add a dependency on camel-juel to your project, as shown in Example 15.1, "Adding the camel-juel dependency". Example 15.1. Adding the camel-juel dependency Static import To use the el() static method in your application code, include the following import statement in your Java source files: Variables Table 15.1, "EL variables" lists the variables that are accessible when using EL. Table 15.1. EL variables: exchange ( org.apache.camel.Exchange ) - the current Exchange; in ( org.apache.camel.Message ) - the IN message; out ( org.apache.camel.Message ) - the OUT message. Example Example 15.2, "Routes using EL" shows two routes that use EL. Example 15.2. Routes using EL
[ "<!-- Maven POM File --> <properties> <camel-version>2.23.2.fuse-7_13_0-00013-redhat-00001</camel-version> </properties> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-juel</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "import static org.apache.camel.language.juel.JuelExpression.el;", "<camelContext> <route> <from uri=\"seda:foo\"/> <filter> <language language=\"el\">USD{in.headers.foo == 'bar'}</language> <to uri=\"seda:bar\"/> </filter> </route> <route> <from uri=\"seda:foo2\"/> <filter> <language language=\"el\">USD{in.headers['My Header'] == 'bar'}</language> <to uri=\"seda:bar\"/> </filter> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/el
5.3. Adding Red Hat Gluster Storage Server to the Cluster
5.3. Adding Red Hat Gluster Storage Server to the Cluster There are two ways to add hosts. You can either add new hosts or import an existing gluster configuration into a cluster. To import an existing gluster configuration: Select Enable Gluster Service . Select Import existing gluster configuration . With this option you can import existing Gluster configurations into a cluster. Provide the IP address of one of the hosts. To add new hosts: Use the drop-down lists to select the Data Center and Host Cluster for the new host. Click OK . The new host displays in the list of hosts with a status of Installing . The host is activated and the status changes to Up automatically. You can manage the lifecycle of a volume using hook scripts. Note Install cockpit using # yum install cockpit before adding the Red Hat Gluster Storage 3.5 node to Red Hat Virtualization Manager 4.3 in a 4.3-compatible cluster. Note To add multiple servers to a cluster, you must first add a single Red Hat Gluster Storage server to the cluster. An error message appears if you try to add multiple servers on the first attempt. Figure 5.3. New Host window
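The cockpit note above can be scripted on the storage node itself; a brief sketch follows, where enabling cockpit.socket and the final peer check are assumptions beyond the note rather than required steps from this procedure.

# On the Red Hat Gluster Storage 3.5 node, before adding it to the 4.3 compatible cluster:
yum install -y cockpit
systemctl enable --now cockpit.socket
# After the host shows Up in the Manager, optionally confirm it joined the trusted storage pool:
gluster peer status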
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/adding_red_hat_storage_server_to_the_cluster
Chapter 2. New Features
Chapter 2. New Features This section describes new features introduced in Red Hat OpenShift Data Foundation 4.13. 2.1. General availability of disaster recovery with stretch clusters solution With this release, disaster recovery with stretch clusters is generally available. In a high availability stretch cluster solution, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is deployed in the OpenShift Container Platform on-premise data centers. This solution is designed to be deployed where latencies do not exceed 5 ms between zones, with a maximum round-trip time (RTT) of 10 ms between locations of the two zones that are residing in the main on-premise data centers. For more information, see Disaster recovery with stretch cluster for OpenShift Data Foundation . 2.2. General availability of support for Network File System OpenShift Data Foundation supports the Network File System (NFS) service for any internal or external applications running in any operating system (OS) except Mac and Windows OS. The NFS service helps to migrate data from any environment to the OpenShift environment, for example, data migration from Red Hat Gluster Storage file system to OpenShift environment. For more information, see Creating exports using NFS . 2.3. Support for enabling in-transit encryption for OpenShift Data Foundation With this release, OpenShift Data Foundation provides a security enhancement to secure network operations by encrypting all the data moving through the network and systems. The enhanced security is provided using encryption in-transit through Ceph's messenger v2 protocol. For more information about how to enable in-transit encryption, see the required Deploying OpenShift Data Foundation guide based on the platform. 2.4. Support for Azure Red Hat OpenShift With this release, you can use the unmanaged OpenShift Data Foundation on Microsoft Azure on Red Hat OpenShift, which is a managed OpenShift platform on Azure. However, note that OpenShift 4.12, 4.13 versions are not yet available in Azure on Red Hat OpenShift, hence the support here is only for OpenShift Data Foundation 4.10 and 4.11. For more information, see Deploying OpenShift Data Foundation 4.10 using Microsoft Azure and Azure Red Hat OpenShift and Deploying OpenShift Data Foundation 4.11 using Microsoft Azure and Azure Red Hat OpenShift . 2.5. Support agnostic deployment of OpenShift Data Foundation on any OpenShift supported platform This release supports and provides a flexible hosting environment for seamless deployment and upgrade of OpenShift Data Foundation. For more information, see Deploying OpenShift Data Foundation on any platform . 2.6. Support installer provisioned infrastructure deployment of OpenShift Data Foundation using bare metal infrastructure With this release, installer provisioned infrastructure deployment of OpenShift Data Foundation using bare metal infrastructure is fully supported. For more information, see Deploying OpenShift Data Foundation using bare metal infrastructure and Scaling storage . 2.7. OpenShift Data Foundation topology in OpenShift Console OpenShift Data Foundation topology provides administrators with rapid observability into important cluster interactions and overall cluster health. This improves the customer experience and their ability to streamline operations to effectively leverage OpenShift Data Foundation to its maximum capabilities. 
For more information, see the View OpenShift Data Foundation Topology section in any of the Deploying OpenShift Data Foundation guides based on the platform. 2.8. General availability of Persistent Volume encryption - service account per namespace OpenShift Data Foundation now provides access to a service account in every OpenShift Container Platform namespace to authenticate with Vault using a Kubernetes service account token. This service account is then used for KMS authentication when encrypting Persistent Volumes. For more information, see Data encryption options and Configuring access to KMS using vaulttenantsa . 2.9. Support OpenShift dual stack with ODF using IPv4 In an OpenShift Data Foundation single-stack deployment, you can use either IPv4 or IPv6. If OpenShift is configured with dual stack, OpenShift Data Foundation uses IPv4, and this combination is supported. For more information, see Network requirements . 2.10. Support for bucket replication deletion When creating a bucket replication policy, you now have the option to enable deletion so that when data is deleted from the source bucket, it is also deleted from the destination bucket. This feature requires logs-based replication, which is currently supported only with AWS. For more information, see Enabling bucket replication deletion . 2.11. Disaster recovery monitoring dashboard This feature provides reference information to understand the health of disaster recovery (DR) replication relationships, such as the following: Application level DR health Cluster level DR health Failover and relocation operation status Replication lag status Alerts For more information, see Monitoring disaster recovery health .
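The NFS release note above does not include commands. As one hedged illustration of what enabling the NFS service might look like on an internal-mode deployment, the sketch below assumes the default StorageCluster name ocs-storagecluster, the openshift-storage namespace, and a spec.nfs.enable field; verify all three against the Creating exports using NFS guide before use.

# Enable the NFS service on the StorageCluster (resource name, namespace, and field are assumptions):
oc patch storagecluster ocs-storagecluster -n openshift-storage \
  --type merge -p '{"spec": {"nfs": {"enable": true}}}'
# Check for an NFS server pod in the storage namespace (pod name prefix is an assumption):
oc get pods -n openshift-storage | grep -i nfs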
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/4.13_release_notes/new_features
2.8. ss
2.8. ss ss is a command-line utility that prints statistical information about sockets, allowing administrators to assess device performance over time. By default, ss lists open non-listening TCP sockets that have established connections, but a number of useful options are provided to help administrators filter out statistics about specific sockets. Red Hat recommends using ss over netstat in Red Hat Enterprise Linux 7. One common usage is ss -tmpie which displays detailed information (including internal information) about TCP sockets, memory usage, and processes using the socket. ss is provided by the iproute package. For more information, see the man page:
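A few common invocations, as a brief sketch of the filtering this section refers to; all options come from the ss man page.

# Detailed TCP socket information, including memory usage and the processes using each socket:
ss -tmpie
# Listening TCP sockets with owning processes, without resolving port names:
ss -tlnp
# Summary statistics for all socket types:
ss -s
# Established TCP connections whose destination port is 443, using the ss filter syntax:
ss -t state established '( dport = :443 )'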
[ "man ss" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-performance_monitoring_tools-ss
Chapter 22. User Authentication
Chapter 22. User Authentication This chapter describes managing user authentication mechanisms, including information on how to manage users' passwords, SSH keys, and certificates, or how to configure one-time password (OTP) and smart-card authentication. Note For documentation on how to log in to Identity Management (IdM) using Kerberos, see Chapter 5, The Basics of Managing the IdM Server and Services . 22.1. User Passwords 22.1.1. Changing and Resetting User Passwords Regular users without the permission to change other users' passwords can change only their own personal password. Personal passwords changed in this way: Must meet the IdM password policies. For details on configuring password policies, see Chapter 28, Defining Password Policies . Administrators and users with password change rights can set initial passwords for new users and reset passwords for existing users. Passwords changed in this way: Do not have to meet the IdM password policies Expire after the first successful login. When this happens, IdM prompts the user to change the expired password immediately. To disable this behavior, see Section 22.1.2, "Enabling Password Reset Without Prompting for a Password Change at the Login" . Note The LDAP Directory Manager (DM) user can change user passwords using LDAP tools. The new password can override any IdM password policies. Passwords set by DM do not expire after the first login. 22.1.1.1. Web UI: Changing Your Own Personal Password In the top right corner, click User name Change password . Figure 22.1. Resetting Password Enter the new password. 22.1.1.2. Web UI: Resetting Another User's Password Select Identity Users . Click the name of the user to edit. Click Actions Reset password . Figure 22.2. Resetting Password Enter the new password, and click Reset Password . Figure 22.3. Confirming New Password 22.1.1.3. Command Line: Changing or Resetting Another User's Password To change your own personal password or to change or reset another user's password, add the --password option to the ipa user-mod command. The command will prompt you for the new password. 22.1.2. Enabling Password Reset Without Prompting for a Password Change at the Login By default, when an administrator resets another user's password, the password expires after the first successful login. See Section 22.1.1, "Changing and Resetting User Passwords" for details. To ensure that passwords set by administrators do not expire when used for the first time, make these changes on every Identity Management server in the domain: Edit the password synchronization entry: cn=ipa_pwd_extop,cn=plugins,cn=config . Specify the administrative user accounts in the passSyncManagersDNs attribute. The attribute is multi-valued. For example, to specify the admin user by using the ldapmodify utility: Warning Specify only the users who require these additional privileges. All users listed under passSyncManagerDNs can: Perform password change operations without requiring a subsequent password reset Bypass the password policy so that no strength or history enforcement is applied 22.1.3. Unlocking User Accounts After Password Failures If a user attempts to log in using an incorrect password a certain number of times, IdM will lock the user account, which prevents the user from logging in. Note that IdM does not display any warning message that the user account has been locked. Note For information on setting the exact number of allowed failed attempts and the duration of the lockout, see Chapter 28, Defining Password Policies . 
IdM automatically unlocks the user account after a specified amount of time has passed. Alternatively, the administrator can unlock the user account manually. Unlocking a User Account Manually To unlock a user account, use the ipa user-unlock command. After this, the user is able to log in again. 22.1.3.1. Checking the Status of a User Account To display the number of failed login attempts for a user, use the ipa user-status command. If the displayed number exceeds the number of allowed failed login attempts, the user account is locked. By default, IdM running on Red Hat Enterprise Linux 7.4 and later does not store the time stamp of the last successful Kerberos authentication of a user. To enable this feature, see Section 22.2, "Enabling Tracking of Last Successful Kerberos Authentication".
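A short sketch combining the commands referenced in this section; global_policy is the default IdM password policy name, and user is a placeholder user name.

# Review the lockout-related settings (maximum failures, lockout duration) of the global password policy:
ipa pwpolicy-show global_policy
# Check how many failed logins a user has accumulated:
ipa user-status user
# Unlock the account manually instead of waiting for the lockout duration to expire:
ipa user-unlock user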
[ "ipa user-mod user --password Password: Enter Password again to verify: -------------------- Modified user \"user\" --------------------", "ldapmodify -x -D \"cn=Directory Manager\" -W -h ldap.example.com -p 389 dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com", "ipa user-unlock user ----------------------- Unlocked account \"user\" -----------------------", "ipa user-status user ----------------------- Account disabled: False ----------------------- Server: example.com Failed logins: 8 Last successful authentication: 20160229080309Z Last failed authentication: 20160229080317Z Time now: 2016-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/user-authentication
Chapter 1. Overview
Chapter 1. Overview The API exposes a list of endpoints for querying Red Hat Product Life Cycle data with certain parameters. This is version 1.0 of the API. Base URL Supported Formats The API supports the JSON format.
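A hedged sketch of querying the API with curl; only the base URL comes from this chapter, while the products endpoint and the name query parameter are assumptions for illustration and should be checked against the endpoint reference.

# Query life cycle data for a product by name (endpoint path and parameter are assumptions):
curl -s "https://access.redhat.com/product-life-cycles/api/v1/products?name=Red%20Hat%20Enterprise%20Linux%209" | jq .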
[ "https://access.redhat.com/product-life-cycles/api/v1/" ]
https://docs.redhat.com/en/documentation/red_hat_product_life_cycle_data_api/1.0/html/red_hat_product_life_cycle_data_api/overview
Chapter 2. Fault tolerant deployments using multiple Prism Elements
Chapter 2. Fault tolerant deployments using multiple Prism Elements By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). To improve the fault tolerance of your OpenShift Container Platform cluster, you can specify that these machines be distributed across multiple Nutanix clusters by configuring failure domains. A failure domain represents an additional Prism Element instance that is available to OpenShift Container Platform machine pools during and after installation. 2.1. Installation method and failure domain configuration The OpenShift Container Platform installation method determines how and when you configure failure domains: If you deploy using installer-provisioned infrastructure, you can configure failure domains in the installation configuration file before deploying the cluster. For more information, see Configuring failure domains . You can also configure failure domains after the cluster is deployed. For more information about configuring failure domains post-installation, see Adding failure domains to an existing Nutanix cluster . If you deploy using infrastructure that you manage (user-provisioned infrastructure) no additional configuration is required. After the cluster is deployed, you can manually distribute control plane and compute machines across failure domains. 2.2. Adding failure domains to an existing Nutanix cluster By default, the installation program installs control plane and compute machines into a single Nutanix Prism Element (cluster). After an OpenShift Container Platform cluster is deployed, you can improve its fault tolerance by adding additional Prism Element instances to the deployment using failure domains. A failure domain represents a single Prism Element instance where new control plane and compute machines can be deployed and existing control plane and compute machines can be distributed. 2.2.1. Failure domain requirements When planning to use failure domains, consider the following requirements: All Nutanix Prism Element instances must be managed by the same instance of Prism Central. A deployment that is comprised of multiple Prism Central instances is not supported. The machines that make up the Prism Element clusters must reside on the same Ethernet network for failure domains to be able to communicate with each other. A subnet is required in each Prism Element that will be used as a failure domain in the OpenShift Container Platform cluster. When defining these subnets, they must share the same IP address prefix (CIDR) and should contain the virtual IP addresses that the OpenShift Container Platform cluster uses. 2.2.2. Adding failure domains to the Infrastructure CR You add failure domains to an existing Nutanix cluster by modifying its Infrastructure custom resource (CR) ( infrastructures.config.openshift.io ). Tip To ensure high-availability, configure three failure domains. Procedure Edit the Infrastructure CR by running the following command: USD oc edit infrastructures.config.openshift.io cluster Configure the failure domains. Example Infrastructure CR with Nutanix failure domains spec: cloudConfig: key: config name: cloud-provider-config #... 
platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> # ... where: <uuid> Specifies the universally unique identifier (UUID) of the Prism Element. <failure_domain_name> Specifies a unique name for the failure domain. The name is limited to 64 or fewer characters, which can include lower-case letters, digits, and a dash ( - ). The dash cannot be in the leading or ending position of the name. <network_uuid> Specifies one or more UUID for the Prism Element subnet object. The CIDR IP address prefix for one of the specified subnets must contain the virtual IP addresses that the OpenShift Container Platform cluster uses. Important Configuring multiple subnets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure multiple subnets in the Infrastructure CR, you must enable the NutanixMultiSubnets feature gate. A maximum of 32 subnets for each failure domain (Prism Element) in an OpenShift Container Platform cluster is supported. All subnet UUID values must be unique. Save the CR to apply the changes. 2.2.3. Distributing control planes across failure domains You distribute control planes across Nutanix failure domains by modifying the control plane machine set custom resource (CR). Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). The control plane machine set custom resource (CR) is in an active state. For more information on checking the control plane machine set custom resource state, see "Additional resources". Procedure Edit the control plane machine set CR by running the following command: USD oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api Configure the control plane machine set to use failure domains by adding a spec.template.machines_v1beta1_machine_openshift_io.failureDomains stanza. Example control plane machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: # ... template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3> # ... Save your changes. By default, the control plane machine set propagates changes to your control plane configuration automatically. If the cluster is configured to use the OnDelete update strategy, you must replace your control planes manually. For more information, see "Additional resources". Additional resources Checking the control plane machine set custom resource state Replacing a control plane machine 2.2.4. 
Distributing compute machines across failure domains You can distribute compute machines across Nutanix failure domains one of the following ways: Editing existing compute machine sets allows you to distribute compute machines across Nutanix failure domains as a minimal configuration update. Replacing existing compute machine sets ensures that the specification is immutable and all your machines are the same. 2.2.4.1. Editing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by using an existing compute machine set, you update the compute machine set with your configuration and then use scaling to replace the existing compute machines. Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m Edit the first compute machine set by running the following command: USD oc edit machineset <machine_set_name_1> -n openshift-machine-api Configure the compute machine set to use the first failure domain by updating the following to the spec.template.spec.providerSpec.value stanza. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Note the value of spec.replicas , because you need it when scaling the compute machine set to apply the changes. Save your changes. List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" To create replacement machines with the new configuration, scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=<twice_the_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set is 2 , scale the replicas to 4 . 
List the machines that are managed by the updated compute machine set by running the following command: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1> When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. To remove the machines that were created with the old configuration, scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=<original_number_of_replicas> \ 1 machineset <machine_set_name_1> \ -n openshift-machine-api 1 For example, if the original number of replicas in the compute machine set was 2 , scale the replicas to 2 . As required, continue to modify machine sets to reference the additional failure domains that are available to the deployment. Additional resources Modifying a compute machine set 2.2.4.2. Replacing compute machine sets to implement failure domains To distribute compute machines across Nutanix failure domains by replacing a compute machine set, you create a new compute machine set with your configuration, wait for the machines that it creates to start, and then delete the old compute machine set. Prerequisites You have configured the failure domains in the cluster's Infrastructure custom resource (CR). Procedure Run the following command to view the cluster's Infrastructure CR. USD oc describe infrastructures.config.openshift.io cluster For each failure domain ( platformSpec.nutanix.failureDomains ), note the cluster's UUID, name, and subnet object UUID. These values are required to add a failure domain to a compute machine set. List the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m Note the names of the existing compute machine sets. Create a YAML file that contains the values for your new compute machine set custom resource (CR) by using one of the following methods: Copy an existing compute machine set configuration into a new file by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml You can edit this YAML file with your preferred text editor. Create a blank YAML file named <new_machine_set_name_1>.yaml with your preferred text editor and include the required values for your new compute machine set. If you are not sure which value to set for a specific field, you can view values of an existing compute machine set CR by running the following command: USD oc get machineset <original_machine_set_name_1> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 
2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create machines with a worker or infra role. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Configure the new compute machine set to use the first failure domain by updating or adding the following to the spec.template.spec.providerSpec.value stanza in the <new_machine_set_name_1>.yaml file. Note Be sure that the values you specify for the cluster and subnets fields match the values that were configured in the failureDomains stanza in the cluster's Infrastructure CR. Example compute machine set with Nutanix failure domains apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 # ... template: spec: # ... providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1> # ... Save your changes. Create a compute machine set CR by running the following command: USD oc create -f <new_machine_set_name_1>.yaml As required, continue to create compute machine sets to reference the additional failure domains that are available to the deployment. List the machines that are managed by the new compute machine sets by running the following command for each new compute machine set: USD oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1> Example output NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s When the new machines are in the Running phase, you can delete the old compute machine sets that do not include the failure domain configuration. 
When you have verified that the new machines are in the Running phase, delete the old compute machine sets by running the following command for each: USD oc delete machineset <original_machine_set_name_1> -n openshift-machine-api Verification To verify that the compute machine sets without the updated configuration are deleted, list the compute machine sets in your cluster by running the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s To verify that the compute machines without the updated configuration are deleted, list the machines in your cluster by running the following command: USD oc get -n openshift-machine-api machines Example output while deletion is in progress NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h Example output when deletion is complete NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s To verify that a machine created by the new compute machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine <machine_from_new_1> -n openshift-machine-api Additional resources Creating a compute machine set on Nutanix
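After distributing machines, a quick, hedged way to confirm which failure domain each machine uses; it assumes the failureDomain.name field shown in the machine set examples above is present on your machine resources.

# List machines with their phase and the failure domain recorded in the provider spec:
oc get machines -n openshift-machine-api \
  -o custom-columns='NAME:.metadata.name,PHASE:.status.phase,FAILURE_DOMAIN:.spec.providerSpec.value.failureDomain.name'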
[ "oc edit infrastructures.config.openshift.io cluster", "spec: cloudConfig: key: config name: cloud-provider-config # platformSpec: nutanix: failureDomains: - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid> - cluster: type: UUID uuid: <uuid> name: <failure_domain_name> subnets: - type: UUID uuid: <network_uuid>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: cluster namespace: openshift-machine-api spec: template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: Nutanix nutanix: - name: <failure_domain_name_1> - name: <failure_domain_name_2> - name: <failure_domain_name_3>", "oc describe infrastructures.config.openshift.io cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <machine_set_name_1> 1 1 1 1 55m <machine_set_name_2> 1 1 1 1 55m", "oc edit machineset <machine_set_name_1> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running AHV Unnamed Development-STS 4h <machine_name_original_2> Running AHV Unnamed Development-STS 4h", "oc annotate machine/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=<twice_the_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<machine_set_name_1>", "oc scale --replicas=<original_number_of_replicas> \\ 1 machineset <machine_set_name_1> -n openshift-machine-api", "oc describe infrastructures.config.openshift.io cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <original_machine_set_name_1> 1 1 1 1 55m <original_machine_set_name_2> 1 1 1 1 55m", "oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml > <new_machine_set_name_1>.yaml", "oc get machineset <original_machine_set_name_1> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "apiVersion: machine.openshift.io/v1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <cluster_name> name: <new_machine_set_name_1> namespace: openshift-machine-api spec: replicas: 2 template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1 failureDomain: name: <failure_domain_name_1> cluster: type: uuid uuid: <prism_element_uuid_1> subnets: - type: uuid uuid: <prism_element_network_uuid_1>", "oc create -f <new_machine_set_name_1>.yaml", "oc get -n openshift-machine-api machines -l machine.openshift.io/cluster-api-machineset=<new_machine_set_name_1>", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Provisioned AHV Unnamed Development-STS 25s <machine_from_new_2> Provisioning AHV Unnamed Development-STS 25s", "oc delete machineset <original_machine_set_name_1> -n openshift-machine-api", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <new_machine_set_name_1> 1 1 1 1 4m12s <new_machine_set_name_2> 1 1 1 1 4m12s", "oc get -n openshift-machine-api machines", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 5m41s <machine_from_new_2> Running AHV Unnamed Development-STS 5m41s <machine_from_original_1> Deleting AHV Unnamed Development-STS 4h <machine_from_original_2> Deleting AHV Unnamed Development-STS 4h", "NAME PHASE TYPE REGION ZONE AGE <machine_from_new_1> Running AHV Unnamed Development-STS 6m30s <machine_from_new_2> Running AHV Unnamed Development-STS 6m30s", "oc describe machine <machine_from_new_1> -n openshift-machine-api" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_nutanix/nutanix-failure-domains
Chapter 12. Preparing for Installation
Chapter 12. Preparing for Installation 12.1. Preparing for a Network Installation Important The eHEA module fails to initialize if 16 GB huge pages are assigned to a system or partition and the kernel command line does not contain the huge page parameters. Therefore, when you perform a network installation through an IBM eHEA ethernet adapter, you cannot assign huge pages to the system or partition during the installation. Large pages should work. Note Make sure no installation DVD (or any other type of DVD or CD) is in your system's CD or DVD drive if you are performing a network-based installation. Having a DVD or CD in the drive might cause unexpected errors. Ensure that you have boot media available on CD, DVD, or a USB storage device such as a flash drive. The Red Hat Enterprise Linux installation medium must be available for either a network installation (via NFS, FTP, HTTP, or HTTPS) or installation via local storage. Use the following steps if you are performing an NFS, FTP, HTTP, or HTTPS installation. The NFS, FTP, HTTP, or HTTPS server to be used for installation over the network must be a separate, network-accessible server. It must provide the complete contents of the installation DVD-ROM. Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. Red Hat recommends that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the yaboot: prompt: Note The public directory used to access the installation files over FTP, NFS, HTTP, or HTTPS is mapped to local storage on the network server. For example, the local directory /var/www/inst/rhel6.9 on the network server can be accessed as http://network.server.com/inst/rhel6.9 . In the following examples, the directory on the installation staging server that will contain the installation files will be specified as /location/of/disk/space . The directory that will be made publicly available via FTP, NFS, HTTP, or HTTPS will be specified as /publicly_available_directory . For example, /location/of/disk/space may be a directory you create called /var/isos . /publicly_available_directory might be /var/www/html/rhel6.9 , for an HTTP install. In the following, you will require an ISO image . An ISO image is a file containing an exact copy of the content of a DVD. To create an ISO image from a DVD use the following command: where dvd is your DVD drive device, name_of_image is the name you give to the resulting ISO image file, and path_to_image is the path to the location on your system where the resulting ISO image will be stored. To copy the files from the installation DVD to a Linux instance, which acts as an installation staging server, continue with either Section 12.1.1, "Preparing for FTP, HTTP, and HTTPS Installation" or Section 12.1.2, "Preparing for an NFS Installation" . 12.1.1. Preparing for FTP, HTTP, and HTTPS Installation Warning If your Apache web server or tftp FTP server configuration enables SSL security, make sure to only enable the TLSv1 protocol, and disable SSLv2 and SSLv3 . This is due to the POODLE SSL vulnerability (CVE-2014-3566). See https://access.redhat.com/solutions/1232413 for details about securing Apache , and https://access.redhat.com/solutions/1234773 for information about securing tftp . 
Extract the files from the ISO image of the installation DVD and place them in a directory that is shared over FTP, HTTP, or HTTPS. , make sure that the directory is shared via FTP, HTTP, or HTTPS, and verify client access. Test to see whether the directory is accessible from the server itself, and then from another machine on the same subnet to which you will be installing. 12.1.2. Preparing for an NFS Installation For NFS installation it is not necessary to extract all the files from the ISO image. It is sufficient to make the ISO image itself, the install.img file, and optionally the product.img file available on the network server via NFS. Transfer the ISO image to the NFS exported directory. On a Linux system, run: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and publicly_available_directory is a directory that is available over NFS or that you intend to make available over NFS. Use a SHA256 checksum program to verify that the ISO image that you copied is intact. Many SHA256 checksum programs are available for various operating systems. On a Linux system, run: where name_of_image is the name of the ISO image file. The SHA256 checksum program displays a string of 64 characters called a hash . Compare this hash to the hash displayed for this particular image on the Downloads page in the Red Hat Customer Portal (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). The two hashes should be identical. Copy the images/ directory from inside the ISO image to the same directory in which you stored the ISO image file itself. Enter the following commands: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and mount_point is a mount point on which to mount the image while you copy files from the image. For example: The ISO image file and an images/ directory are now present, side-by-side, in the same directory. Verify that the images/ directory contains at least the install.img file, without which installation cannot proceed. Optionally, the images/ directory should contain the product.img file, without which only the packages for a Minimal installation will be available during the package group selection stage (refer to Section 16.19, "Package Group Selection" ). Important install.img and product.img must be the only files in the images/ directory. Ensure that an entry for the publicly available directory exists in the /etc/exports file on the network server so that the directory is available via NFS. To export a directory read-only to a specific system, use: To export a directory read-only to all systems, use: On the network server, start the NFS daemon (on a Red Hat Enterprise Linux system, use /sbin/service nfs start ). If NFS is already running, reload the configuration file (on a Red Hat Enterprise Linux system use /sbin/service nfs reload ). Be sure to test the NFS share following the directions in the Red Hat Enterprise Linux Deployment Guide . Refer to your NFS documentation for details on starting and stopping the NFS server. Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: prompt:
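A consolidated sketch of the HTTP preparation path, using the example locations from this section ( /var/isos and /var/www/html/rhel6.9 ); adjust the paths, web server configuration, and firewall settings for your environment.

# Extract the installation tree from the ISO image into the Apache document root:
mkdir -p /var/www/html/rhel6.9
mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro
cp -pr /mnt/tmp/* /var/www/html/rhel6.9/
umount /mnt/tmp
# Start the web server and verify that install.img is reachable (assumes DocumentRoot is /var/www/html):
service httpd start
curl -I http://localhost/rhel6.9/images/install.img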
[ "linux mediacheck", "dd if=/dev/ dvd of=/ path_to_image / name_of_image .iso", "mv / path_to_image / name_of_image .iso / publicly_available_directory /", "sha256sum name_of_image .iso", "mount -t iso9660 / path_to_image / name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /images / publicly_available_directory / umount / mount_point", "mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro cp -pr /mnt/tmp/images /var/isos/ umount /mnt/tmp", "/publicly_available_directory client.ip.address (ro)", "/publicly_available_directory * (ro)", "linux mediacheck" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-Preparing-ppc
Managing and monitoring security updates
Managing and monitoring security updates Red Hat Enterprise Linux 9 Update RHEL 9 system security to prevent attackers from exploiting known flaws Red Hat Customer Content Services
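On a registered RHEL 9 system, security errata are typically reviewed and applied with dnf; the following brief sketch uses <advisory_id> as a placeholder for a real advisory ID, such as an RHSA.

# List available security updates and the advisories that provide them:
dnf updateinfo list --security
# Show details for a specific advisory:
dnf updateinfo info <advisory_id>
# Apply only security updates:
dnf upgrade --security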
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_and_monitoring_security_updates/index
Chapter 14. Using bound service account tokens
Chapter 14. Using bound service account tokens You can use bound service account tokens, which improves the ability to integrate with cloud provider identity access management (IAM) services, such as AWS IAM. 14.1. About bound service account tokens You can use bound service account tokens to limit the scope of permissions for a given service account token. These tokens are audience and time-bound. This facilitates the authentication of a service account to an IAM role and the generation of temporary credentials mounted to a pod. You can request bound service account tokens by using volume projection and the TokenRequest API. 14.2. Configuring bound service account tokens using volume projection You can configure pods to request bound service account tokens by using volume projection. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . Procedure Optional: Set the service account issuer. This step is typically not required if the bound tokens are used only within the cluster. Important If you change the service account issuer to a custom one, the service account issuer is still trusted for the 24 hours. You can force all holders to request a new bound token either by manually restarting all pods in the cluster or by performing a rolling node restart. Before performing either action, wait for a new revision of the Kubernetes API server pods to roll out with your service account issuer changes. Edit the cluster Authentication object: USD oc edit authentications cluster Set the spec.serviceAccountIssuer field to the desired service account issuer value: spec: serviceAccountIssuer: https://test.default.svc 1 1 This value should be a URL from which the recipient of a bound token can source the public keys necessary to verify the signature of the token. The default is https://kubernetes.default.svc . Save the file to apply the changes. Wait for a new revision of the Kubernetes API server pods to roll out. It can take several minutes for all nodes to update to the new revision. Run the following command: USD oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}' Review the NodeInstallerProgressing status condition for the Kubernetes API server to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update: AllNodesAtLatestRevision 3 nodes are at revision 12 1 1 In this example, the latest revision number is 12 . If the output shows a message similar to one of the following messages, the update is still in progress. Wait a few minutes and try again. 3 nodes are at revision 11; 0 nodes have achieved new revision 12 2 nodes are at revision 11; 1 nodes are at revision 12 Optional: Force the holder to request a new bound token either by performing a rolling node restart or by manually restarting all pods in the cluster. Perform a rolling node restart: Warning It is not recommended to perform a rolling node restart if you have custom workloads running on your cluster, because it can cause a service interruption. Instead, manually restart all pods in the cluster. Restart nodes sequentially. Wait for the node to become fully available before restarting the node. See Rebooting a node gracefully for instructions on how to drain, restart, and mark a node as schedulable again. 
Manually restart all pods in the cluster: Warning Be aware that running this command causes a service interruption, because it deletes every running pod in every namespace. These pods will automatically restart after they are deleted. Run the following command: USD for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{"\n"} {end}'); \ do oc delete pods --all -n USDI; \ sleep 1; \ done Configure a pod to use a bound service account token by using volume projection. Create a file called pod-projected-svc-token.yaml with the following contents: apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4 1 A reference to an existing service account. 2 The path relative to the mount point of the file to project the token into. 3 Optionally set the expiration of the service account token, in seconds. The default is 3600 seconds (1 hour) and must be at least 600 seconds (10 minutes). The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. 4 Optionally set the intended audience of the token. The recipient of a token should verify that the recipient identity matches the audience claim of the token, and should otherwise reject the token. The audience defaults to the identifier of the API server. Note In order to prevent unexpected failure, OpenShift Container Platform overrides the expirationSeconds value to be one year from the initial token generation with the --service-account-extend-token-expiration default of true . You cannot change this setting. Create the pod: USD oc create -f pod-projected-svc-token.yaml The kubelet requests and stores the token on behalf of the pod, makes the token available to the pod at a configurable file path, and refreshes the token as it approaches expiration. The application that uses the bound token must handle reloading the token when it rotates. The kubelet rotates the token if it is older than 80 percent of its time to live, or if the token is older than 24 hours. 14.3. Creating bound service account tokens outside the pod Prerequisites You have access to the cluster as a user with the cluster-admin role. You have created a service account. This procedure assumes that the service account is named build-robot . 
Procedure Create the bound service account token outside the pod by running the following command: USD oc create token build-robot Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ Additional resources Rebooting a node gracefully Creating service accounts
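For example, you can check the audience and expiry claims of a token created this way by decoding its payload. The following shell sketch is illustrative only: the audience value ( vault ), the duration ( 7200s ), and the availability of the --audience and --duration options in your version of oc are assumptions.

payload=$(oc create token build-robot --audience=vault --duration=7200s | cut -d. -f2)
# JWT payloads are base64url-encoded without padding: swap the URL-safe alphabet and re-pad
payload=$(echo "$payload" | tr '_-' '/+')
case $(( ${#payload} % 4 )) in 2) payload="${payload}==" ;; 3) payload="${payload}=" ;; esac
echo "$payload" | base64 -d   # prints the JSON claims, including "aud" and "exp"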
[ "oc edit authentications cluster", "spec: serviceAccountIssuer: https://test.default.svc 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - image: nginx name: nginx volumeMounts: - mountPath: /var/run/secrets/tokens name: vault-token serviceAccountName: build-robot 1 volumes: - name: vault-token projected: sources: - serviceAccountToken: path: vault-token 2 expirationSeconds: 7200 3 audience: vault 4", "oc create -f pod-projected-svc-token.yaml", "oc create token build-robot", "eyJhbGciOiJSUzI1NiIsImtpZCI6IkY2M1N4MHRvc2xFNnFSQlA4eG9GYzVPdnN3NkhIV0tRWmFrUDRNcWx4S0kifQ.eyJhdWQiOlsiaHR0cHM6Ly9pc3N1ZXIyLnRlc3QuY29tIiwiaHR0cHM6Ly9pc3N1ZXIxLnRlc3QuY29tIiwiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY3OTU0MzgzMCwiaWF0IjoxNjc5NTQwMjMwLCJpc3MiOiJodHRwczovL2lzc3VlcjIudGVzdC5jb20iLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImRlZmF1bHQiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoidGVzdC1zYSIsInVpZCI6ImM3ZjA4MjkwLWIzOTUtNGM4NC04NjI4LTMzMTM1NTVhNWY1OSJ9fSwibmJmIjoxNjc5NTQwMjMwLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDp0ZXN0LXNhIn0.WyAOPvh1BFMUl3LNhBCrQeaB5wSynbnCfojWuNNPSilT4YvFnKibxwREwmzHpV4LO1xOFZHSi6bXBOmG_o-m0XNDYL3FrGHd65mymiFyluztxa2lgHVxjw5reIV5ZLgNSol3Y8bJqQqmNg3rtQQWRML2kpJBXdDHNww0E5XOypmffYkfkadli8lN5QQD-MhsCbiAF8waCYs8bj6V6Y7uUKTcxee8sCjiRMVtXKjQtooERKm-CH_p57wxCljIBeM89VdaR51NJGued4hVV5lxvVrYZFu89lBEAq4oyQN_d6N1vBWGXQMyoihnt_fQjn-NfnlJWk-3NSZDIluDJAv7e-MTEk3geDrHVQKNEzDei2-Un64hSzb-n1g1M0Vn0885wQBQAePC9UlZm8YZlMNk1tq6wIUKQTMv3HPfi5HtBRqVc2eVs0EfMX4-x-PHhPCasJ6qLJWyj6DvyQ08dP4DW_TWZVGvKlmId0hzwpg59TTcLR0iCklSEJgAVEEd13Aa_M0-faD11L3MhUGxw0qxgOsPczdXUsolSISbefs7OKymzFSIkTAn9sDQ8PHMOsuyxsK8vzfrR-E0z7MAeguZ2kaIY7cZqbN6WFy0caWgx46hrKem9vCKALefElRYbCg3hcBmowBcRTOqaFHLNnHghhU1LaRpoFzH7OUarqX9SGQ" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/bound-service-account-tokens
Appendix B. Contact information
Appendix B. Contact information Red Hat Decision Manager documentation team: [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/author-group
A.2. Wake-ups
A.2. Wake-ups Many applications scan configuration files for changes. In many cases, the scan is performed at a fixed interval, for example, every minute. This can be a problem, because it forces a disk to wake up from spindowns. The best solution is to find a good interval, a good checking mechanism, or to check for changes with inotify and react to events. Inotify can check a variety of changes on a file or a directory. For example: #include <stdio.h> #include <stdlib.h> #include <sys/time.h> #include <sys/types.h> #include <sys/inotify.h> #include <unistd.h> int main(int argc, char *argv[]) { int fd; int wd; int retval; struct timeval tv; fd = inotify_init(); /* checking modification of a file - writing into */ wd = inotify_add_watch(fd, "./myConfig", IN_MODIFY); if (wd < 0) { printf("inotify cannot be used\n"); /* switch back to checking */ } fd_set rfds; FD_ZERO(&rfds); FD_SET(fd, &rfds); tv.tv_sec = 5; tv.tv_usec = 0; retval = select(fd + 1, &rfds, NULL, NULL, &tv); if (retval == -1) perror("select()"); else if (retval) { printf("file was modified\n"); } else printf("timeout\n"); return EXIT_SUCCESS; } The advantage of this approach is the variety of checks that you can perform. The main limitation is that only a limited number of watches are available on a system. The number can be obtained from /proc/sys/fs/inotify/max_user_watches and although it can be changed, this is not recommended. Furthermore, in case inotify fails, the code has to fall back to a different check method, which usually means many occurrences of #if #define in the source code. For more information on inotify , see the inotify(7) man page.
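If writing C is not practical, the same event-driven approach can be scripted. The following shell sketch assumes that the inotify-tools package, which provides the inotifywait command, is installed; the file name matches the example above.

# Block until ./myConfig is modified, then react once:
inotifywait -e modify ./myConfig && echo "file was modified"
# Inspect the current limit on inotify watches:
cat /proc/sys/fs/inotify/max_user_watches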
[ "#include <stdio.h> #include <stdlib.h> #include <sys/time.h> #include <sys/types.h> #include <sys/inotify.h> #include <unistd.h> int main(int argc, char *argv[]) { int fd; int wd; int retval; struct timeval tv; fd = inotify_init(); /* checking modification of a file - writing into */ wd = inotify_add_watch(fd, \"./myConfig\", IN_MODIFY); if (wd < 0) { printf(\"inotify cannot be used\\n\"); /* switch back to previous checking */ } fd_set rfds; FD_ZERO(&rfds); FD_SET(fd, &rfds); tv.tv_sec = 5; tv.tv_usec = 0; retval = select(fd + 1, &rfds, NULL, NULL, &tv); if (retval == -1) perror(\"select()\"); else if (retval) { printf(\"file was modified\\n\"); } else printf(\"timeout\\n\"); return EXIT_SUCCESS; }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/developer_tips-wake-ups
1.4. Command Line Administration Tools
1.4. Command Line Administration Tools In addition to Conga and the system-config-cluster Cluster Administration GUI, command line tools are available for administering the cluster infrastructure and the high-availability service management components. The command line tools are used by the Cluster Administration GUI and init scripts supplied by Red Hat. Table 1.1, "Command Line Tools" summarizes the command line tools. Table 1.1. Command Line Tools Command Line Tool Used With Purpose ccs_tool - Cluster Configuration System Tool Cluster Infrastructure ccs_tool is a program for making online updates to the cluster configuration file. It provides the capability to create and modify cluster infrastructure components (for example, creating a cluster, adding and removing a node). For more information about this tool, refer to the ccs_tool(8) man page. cman_tool - Cluster Management Tool Cluster Infrastructure cman_tool is a program that manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster. cman_tool is available with DLM clusters only. For more information about this tool, refer to the cman_tool(8) man page. gulm_tool - Cluster Management Tool Cluster Infrastructure gulm_tool is a program used to manage GULM. It provides an interface to lock_gulmd , the GULM lock manager. gulm_tool is available with GULM clusters only. For more information about this tool, refer to the gulm_tool(8) man page. fence_tool - Fence Tool Cluster Infrastructure fence_tool is a program used to join or leave the default fence domain. Specifically, it starts the fence daemon ( fenced ) to join the domain and kills fenced to leave the domain. fence_tool is available with DLM clusters only. For more information about this tool, refer to the fence_tool(8) man page. clustat - Cluster Status Utility High-availability Service Management Components The clustat command displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. For more information about this tool, refer to the clustat(8) man page. clusvcadm - Cluster User Service Administration Utility High-availability Service Management Components The clusvcadm command allows you to enable, disable, relocate, and restart high-availability services in a cluster. For more information about this tool, refer to the clusvcadm(8) man page.
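The following shell sketch shows how a few of these tools are typically combined during day-to-day administration. The service and node names are placeholders, and the exact options are described in the clustat(8) and clusvcadm(8) man pages.

# Check membership, quorum view, and the state of configured services:
clustat
# Relocate the high-availability service "webservice" to another member (illustrative names):
clusvcadm -r webservice -m node2.example.com
# Disable the service when taking it out of production:
clusvcadm -d webservice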
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-cmdlinetools-overview-ca
Chapter 6. Hot deployment vs manual deployment
Chapter 6. Hot deployment vs manual deployment Abstract Fuse provides two different approaches for deploying files: hot deployment or manual deployment. If you need to deploy a collection of related bundles, it is recommended that you deploy them together as a feature , rather than singly (see Chapter 9, Deploying Features ). 6.1. Hot Deployment 6.1.1. Hot deploy directory Fuse monitors files in the FUSE_HOME/deploy directory and hot deploys everything in this directory. Each time a file is copied to this directory, it is installed in the runtime and started. You can subsequently update or delete the files in the FUSE_HOME/deploy directory, and the changes are handled automatically. For example, if you have just built the bundle, ProjectDir /target/foo-1.0-SNAPSHOT.jar , you can deploy this bundle by copying it to the FUSE_HOME /deploy directory as follows (assuming you are working on a UNIX platform): 6.2. Hot undeploying a bundle To undeploy a bundle from the hot deploy directory, simply delete the bundle file from the FUSE_HOME/deploy directory while the Apache Karaf container is running . Important The hot undeploy mechanism does not work while the container is shut down. If you shut down the Karaf container, delete the bundle file from the FUSE_HOME/deploy directory, and then restart the Karaf container, the bundle will not be undeployed after you restart the container. You can also undeploy a bundle by using the bundle:uninstall console command. 6.3. Manual Deployment 6.3.1. Overview You can manually deploy and undeploy bundles by issuing commands at the Fuse console. 6.3.2. Installing a bundle Use the bundle:install command to install one or more bundles in the OSGi container. This command has the following syntax: Where UrlList is a whitespace-separated list of URLs that specify the location of each bundle to deploy. The following command arguments are supported: -s Start the bundle after installing. --start Same as -s . --help Show and explain the command syntax. For example, to install and start the bundle, ProjectDir /target/foo-1.0-SNAPSHOT.jar , enter the following command at the Karaf console prompt: Note On Windows platforms, you must be careful to use the correct syntax for the file URL in this command. See Section 15.1, "File URL Handler" for details. 6.3.3. Uninstalling a bundle To uninstall a bundle, you must first obtain its bundle ID using the bundle:list command. You can then uninstall the bundle using the bundle:uninstall command (which takes the bundle ID as its argument). For example, if you have already installed the bundle named A Camel OSGi Service Unit , entering bundle:list at the console prompt might produce output like the following: You can now uninstall the bundle with the ID, 181 , by entering the following console command: 6.3.4. URL schemes for locating bundles When specifying the location URL to the bundle:install command, you can use any of the URL schemes supported by Fuse, which includes the following scheme types: Section 15.1, "File URL Handler" . Section 15.2, "HTTP URL Handler" . Section 15.3, "Mvn URL Handler" . 6.4. Redeploying bundles automatically using bundle:watch In a development environment, where a developer is constantly changing and rebuilding a bundle, it is typically necessary to re-install the bundle multiple times. Using the bundle:watch command, you can instruct Karaf to monitor your local Maven repository and re-install a particular bundle automatically, as soon as it changes in your local Maven repository.
For example, given a particular bundle with the bundle ID 751 , you can enable automatic redeployment by entering the command: Now, whenever you rebuild and install the Maven artifact into your local Maven repository (for example, by executing mvn install in your Maven project), the Karaf container automatically re-installs the changed Maven artifact. For more details, see Apache Karaf Console Reference . Important The bundle:watch command is intended for a development environment only. It is not recommended for use in a production environment.
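For instance, a typical development loop might look like the following sketch; the bundle ID 751 matches the example above, ProjectDir is a placeholder, and the console prompt shown is only indicative.

karaf@root()> bundle:watch 751
% cd ProjectDir
% mvn install

After mvn install completes, Karaf detects the updated artifact in the local Maven repository and re-installs bundle 751 automatically.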
[ "% cp ProjectDir /target/foo-1.0-SNAPSHOT.jar FUSE_HOME/deploy", "bundle:install [-s] [--start] [--help] UrlList", "bundle:install -s file: ProjectDir /target/foo-1.0-SNAPSHOT.jar", "[ 181] [Resolved ] [ ] [ ] [ 60] A Camel OSGi Service Unit (1.0.0.SNAPSHOT)", "bundle:uninstall 181", "bundle:watch 751" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/DeployBundle
Chapter 15. Network flows format reference
Chapter 15. Network flows format reference These are the specifications for network flows format, used both internally and when exporting flows to Kafka. 15.1. Network Flows format reference This is the specification of the network flows format. That format is used when a Kafka exporter is configured, for Prometheus metrics labels as well as internally for the Loki store. The "Filter ID" column shows which related name to use when defining Quick Filters (see spec.consolePlugin.quickFilters in the FlowCollector specification). The "Loki label" column is useful when querying Loki directly: label fields need to be selected using stream selectors . The "Cardinality" column gives information about the implied metric cardinality if this field was to be used as a Prometheus label with the FlowMetrics API. Refer to the FlowMetrics documentation for more information on using this API. Name Type Description Filter ID Loki label Cardinality OpenTelemetry Bytes number Number of bytes n/a no avoid bytes DnsErrno number Error number returned from DNS tracker ebpf hook function dns_errno no fine dns.errno DnsFlags number DNS flags for DNS record n/a no fine dns.flags DnsFlagsResponseCode string Parsed DNS header RCODEs name dns_flag_response_code no fine dns.responsecode DnsId number DNS record id dns_id no avoid dns.id DnsLatencyMs number Time between a DNS request and response, in milliseconds dns_latency no avoid dns.latency Dscp number Differentiated Services Code Point (DSCP) value dscp no fine dscp DstAddr string Destination IP address (ipv4 or ipv6) dst_address no avoid destination.address DstK8S_HostIP string Destination node IP dst_host_address no fine destination.k8s.host.address DstK8S_HostName string Destination node name dst_host_name no fine destination.k8s.host.name DstK8S_Name string Name of the destination Kubernetes object, such as Pod name, Service name or Node name. dst_name no careful destination.k8s.name DstK8S_Namespace string Destination namespace dst_namespace yes fine destination.k8s.namespace.name DstK8S_NetworkName string Destination network name dst_network no fine n/a DstK8S_OwnerName string Name of the destination owner, such as Deployment name, StatefulSet name, etc. dst_owner_name yes fine destination.k8s.owner.name DstK8S_OwnerType string Kind of the destination owner, such as Deployment, StatefulSet, etc. dst_kind no fine destination.k8s.owner.kind DstK8S_Type string Kind of the destination Kubernetes object, such as Pod, Service or Node. dst_kind yes fine destination.k8s.kind DstK8S_Zone string Destination availability zone dst_zone yes fine destination.zone DstMac string Destination MAC address dst_mac no avoid destination.mac DstPort number Destination port dst_port no careful destination.port DstSubnetLabel string Destination subnet label dst_subnet_label no fine n/a Duplicate boolean Indicates if this flow was also captured from another interface on the same host n/a no fine n/a Flags string[] List of TCP flags comprised in the flow, according to RFC-9293, with additional custom flags to represent the following per-packet combinations: - SYN_ACK - FIN_ACK - RST_ACK tcp_flags no careful tcp.flags FlowDirection number Flow interpreted direction from the node observation point. 
Can be one of: - 0: Ingress (incoming traffic, from the node observation point) - 1: Egress (outgoing traffic, from the node observation point) - 2: Inner (with the same source and destination node) node_direction yes fine host.direction IcmpCode number ICMP code icmp_code no fine icmp.code IcmpType number ICMP type icmp_type no fine icmp.type IfDirections number[] Flow directions from the network interface observation point. Can be one of: - 0: Ingress (interface incoming traffic) - 1: Egress (interface outgoing traffic) ifdirections no fine interface.directions Interfaces string[] Network interfaces interfaces no careful interface.names K8S_ClusterName string Cluster name or identifier cluster_name yes fine k8s.cluster.name K8S_FlowLayer string Flow layer: 'app' or 'infra' flow_layer yes fine k8s.layer NetworkEvents object[] Network events, such as network policy actions, composed of nested fields: - Feature (such as "acl" for network policies) - Type (such as an "AdminNetworkPolicy") - Namespace (namespace where the event applies, if any) - Name (name of the resource that triggered the event) - Action (such as "allow" or "drop") - Direction (Ingress or Egress) network_events no avoid n/a Packets number Number of packets pkt_drop_cause no avoid packets PktDropBytes number Number of bytes dropped by the kernel n/a no avoid drops.bytes PktDropLatestDropCause string Latest drop cause pkt_drop_cause no fine drops.latestcause PktDropLatestFlags number TCP flags on last dropped packet n/a no fine drops.latestflags PktDropLatestState string TCP state on last dropped packet pkt_drop_state no fine drops.lateststate PktDropPackets number Number of packets dropped by the kernel n/a no avoid drops.packets Proto number L4 protocol protocol no fine protocol Sampling number Sampling rate used for this flow n/a no fine n/a SrcAddr string Source IP address (ipv4 or ipv6) src_address no avoid source.address SrcK8S_HostIP string Source node IP src_host_address no fine source.k8s.host.address SrcK8S_HostName string Source node name src_host_name no fine source.k8s.host.name SrcK8S_Name string Name of the source Kubernetes object, such as Pod name, Service name or Node name. src_name no careful source.k8s.name SrcK8S_Namespace string Source namespace src_namespace yes fine source.k8s.namespace.name SrcK8S_NetworkName string Source network name src_network no fine n/a SrcK8S_OwnerName string Name of the source owner, such as Deployment name, StatefulSet name, etc. src_owner_name yes fine source.k8s.owner.name SrcK8S_OwnerType string Kind of the source owner, such as Deployment, StatefulSet, etc. src_kind no fine source.k8s.owner.kind SrcK8S_Type string Kind of the source Kubernetes object, such as Pod, Service or Node. 
src_kind yes fine source.k8s.kind SrcK8S_Zone string Source availability zone src_zone yes fine source.zone SrcMac string Source MAC address src_mac no avoid source.mac SrcPort number Source port src_port no careful source.port SrcSubnetLabel string Source subnet label src_subnet_label no fine n/a TimeFlowEndMs number End timestamp of this flow, in milliseconds n/a no avoid timeflowend TimeFlowRttNs number TCP Smoothed Round Trip Time (SRTT), in nanoseconds time_flow_rtt no avoid tcp.rtt TimeFlowStartMs number Start timestamp of this flow, in milliseconds n/a no avoid timeflowstart TimeReceived number Timestamp when this flow was received and processed by the flow collector, in seconds n/a no avoid timereceived Udns string[] List of User Defined Networks udns no careful n/a XlatDstAddr string Packet translation destination address xlat_dst_address no avoid n/a XlatDstPort number Packet translation destination port xlat_dst_port no careful n/a XlatSrcAddr string Packet translation source address xlat_src_address no avoid n/a XlatSrcPort number Packet translation source port xlat_src_port no careful n/a ZoneId number Packet translation zone id xlat_zone_id no avoid n/a _HashId string In conversation tracking, the conversation identifier id no avoid n/a _RecordType string Type of record: flowLog for regular flow logs, or newConnection , heartbeat , endConnection for conversation tracking type yes fine n/a
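As an illustration of how the "Loki label" column is used, the following logcli invocation queries the Loki store directly with a stream selector built from fields marked as labels ( SrcK8S_Namespace and FlowDirection ). This is a sketch only: it assumes direct access to the Loki endpoint through the LOKI_ADDR environment variable, the default label set, and a placeholder namespace value.

# LOKI_ADDR must point at the Loki gateway used by network observability
logcli query '{SrcK8S_Namespace="frontend", FlowDirection="0"}' --limit=20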
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/json-flows-format-reference
Appendix A. Versioning information
Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/release_notes_for_red_hat_decision_manager_7.13/versioning-information
Part I. Satellite overview and concepts
Part I. Satellite overview and concepts Red Hat Satellite is a centralized tool for provisioning, remote management, and monitoring of multiple Red Hat Enterprise Linux deployments. With Satellite, you can deploy, configure, and maintain your systems across physical, virtual, and cloud environments.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/overview_concepts_and_deployment_considerations/project_overview_and_concepts_planning
Chapter 59. port
Chapter 59. port This chapter describes the commands under the port command. 59.1. port create Create a new port Usage: Table 59.1. Positional arguments Value Summary <name> Name of this port Table 59.2. Command arguments Value Summary -h, --help Show this help message and exit --network <network> Network this port belongs to (name or id) --description <description> Description of this port --device <device-id> Port device id --mac-address <mac-address> Mac address of this port (admin only) --device-owner <device-owner> Device owner of this port. this is the entity that uses the port (for example, network:dhcp). --vnic-type <vnic-type> Vnic type for this port (direct | direct-physical | macvtap | normal | baremetal | virtio-forwarder | vdpa, default: normal) --host <host-id> Allocate port on host <host-id> (id only) --dns-domain dns-domain Set dns domain to this port (requires dns_domain extension for ports) --dns-name <dns-name> Set dns name for this port (requires dns integration extension) --numa-policy-required Numa affinity policy required to schedule this port --numa-policy-preferred Numa affinity policy preferred to schedule this port --numa-policy-legacy Numa affinity policy using legacy mode to schedule this port --fixed-ip subnet=<subnet>,ip-address=<ip-address> Desired ip and/or subnet for this port (name or id): subnet=<subnet>,ip-address=<ip-address> (repeat option to set multiple fixed IP addresses) --no-fixed-ip No ip or subnet for this port. --binding-profile <binding-profile> Custom data to be passed as binding:profile. data may be passed as <key>=<value> or JSON. (repeat option to set multiple binding:profile data) --enable Enable port (default) --disable Disable port --enable-uplink-status-propagation Enable uplink status propagate --disable-uplink-status-propagation Disable uplink status propagate (default) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --extra-dhcp-option name=<name>[,value=<value>,ip-version={4,6}] Extra dhcp options to be assigned to this port: name=<name>[,value=<value>,ip-version={4,6}] (repeat option to set multiple extra DHCP options) --security-group <security-group> Security group to associate with this port (name or ID) (repeat option to set multiple security groups) --no-security-group Associate no security groups with this port --qos-policy <qos-policy> Attach qos policy to this port (name or id) --enable-port-security Enable port security for this port (default) --disable-port-security Disable port security for this port --allowed-address ip-address=<ip-address>[,mac-address=<mac-address>] Add allowed-address pair associated with this port: ip-address=<ip-address>[,mac-address=<mac-address>] (repeat option to set multiple allowed-address pairs) --device-profile <device-profile> Cyborg port device profile --tag <tag> Tag to be added to the port (repeat option to set multiple tags) --no-tag No tags associated with the port Table 59.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 59.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 59.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 59.6. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 59.2. port delete Delete port(s) Usage: Table 59.7. Positional arguments Value Summary <port> Port(s) to delete (name or id) Table 59.8. Command arguments Value Summary -h, --help Show this help message and exit 59.3. port list List ports Usage: Table 59.9. Command arguments Value Summary -h, --help Show this help message and exit --device-owner <device-owner> List only ports with the specified device owner. this is the entity that uses the port (for example, network:dhcp). --host <host-id> List only ports bound to this host id --network <network> List only ports connected to this network (name or id) --router <router> List only ports attached to this router (name or id) --server <server> List only ports attached to this server (name or id) --device-id <device-id> List only ports with the specified device id --mac-address <mac-address> List only ports with this mac address --long List additional fields in output --project <project> List ports according to their project (name or id) --name <name> List ports according to their name --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --fixed-ip subnet=<subnet>,ip-address=<ip-address>,ip-substring=<ip-substring> Desired ip and/or subnet for filtering ports (name or ID): subnet=<subnet>,ip-address=<ip-address>,ip- substring=<ip-substring> (repeat option to set multiple fixed IP addresses) --tags <tag>[,<tag>,... ] List ports which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List ports which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude ports which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude ports which have any given tag(s) (comma- separated list of tags) Table 59.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 59.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 59.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 59.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 59.4. port set Set port properties Usage: Table 59.14. 
Positional arguments Value Summary <port> Port to modify (name or id) Table 59.15. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description of this port --device <device-id> Port device id --mac-address <mac-address> Mac address of this port (admin only) --device-owner <device-owner> Device owner of this port. this is the entity that uses the port (for example, network:dhcp). --vnic-type <vnic-type> Vnic type for this port (direct | direct-physical | macvtap | normal | baremetal | virtio-forwarder | vdpa, default: normal) --host <host-id> Allocate port on host <host-id> (id only) --dns-domain dns-domain Set dns domain to this port (requires dns_domain extension for ports) --dns-name <dns-name> Set dns name for this port (requires dns integration extension) --numa-policy-required Numa affinity policy required to schedule this port --numa-policy-preferred Numa affinity policy preferred to schedule this port --numa-policy-legacy Numa affinity policy using legacy mode to schedule this port --enable Enable port --disable Disable port --name <name> Set port name --fixed-ip subnet=<subnet>,ip-address=<ip-address> Desired ip and/or subnet for this port (name or id): subnet=<subnet>,ip-address=<ip-address> (repeat option to set multiple fixed IP addresses) --no-fixed-ip Clear existing information of fixed ip addresses.Specify both --fixed-ip and --no-fixed-ip to overwrite the current fixed IP addresses. --binding-profile <binding-profile> Custom data to be passed as binding:profile. data may be passed as <key>=<value> or JSON. (repeat option to set multiple binding:profile data) --no-binding-profile Clear existing information of binding:profile. specify both --binding-profile and --no-binding-profile to overwrite the current binding:profile information. --qos-policy <qos-policy> Attach qos policy to this port (name or id) --security-group <security-group> Security group to associate with this port (name or ID) (repeat option to set multiple security groups) --no-security-group Clear existing security groups associated with this port --enable-port-security Enable port security for this port --disable-port-security Disable port security for this port --allowed-address ip-address=<ip-address>[,mac-address=<mac-address>] Add allowed-address pair associated with this port: ip-address=<ip-address>[,mac-address=<mac-address>] (repeat option to set multiple allowed-address pairs) --no-allowed-address Clear existing allowed-address pairs associated with this port. (Specify both --allowed-address and --no- allowed-address to overwrite the current allowed- address pairs) --extra-dhcp-option name=<name>[,value=<value>,ip-version={4,6}] Extra dhcp options to be assigned to this port: name=<name>[,value=<value>,ip-version={4,6}] (repeat option to set multiple extra DHCP options) --data-plane-status <status> Set data plane status of this port (active | down). Unset it to None with the port unset command (requires data plane status extension) --tag <tag> Tag to be added to the port (repeat option to set multiple tags) --no-tag Clear tags associated with the port. specify both --tag and --no-tag to overwrite current tags 59.5. port show Display port details Usage: Table 59.16. Positional arguments Value Summary <port> Port to display (name or id) Table 59.17. Command arguments Value Summary -h, --help Show this help message and exit Table 59.18. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 59.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 59.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 59.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 59.6. port unset Unset port properties Usage: Table 59.22. Positional arguments Value Summary <port> Port to modify (name or id) Table 59.23. Command arguments Value Summary -h, --help Show this help message and exit --fixed-ip subnet=<subnet>,ip-address=<ip-address> Desired ip and/or subnet which should be removed from this port (name or ID): subnet=<subnet>,ip- address=<ip-address> (repeat option to unset multiple fixed IP addresses) --binding-profile <binding-profile-key> Desired key which should be removed from binding:profile (repeat option to unset multiple binding:profile data) --security-group <security-group> Security group which should be removed this port (name or ID) (repeat option to unset multiple security groups) --allowed-address ip-address=<ip-address>[,mac-address=<mac-address>] Desired allowed-address pair which should be removed from this port: ip-address=<ip-address>[,mac- address=<mac-address>] (repeat option to unset multiple allowed-address pairs) --qos-policy Remove the qos policy attached to the port --data-plane-status Clear existing information of data plane status --numa-policy Clear existing numa affinity policy --tag <tag> Tag to be removed from the port (repeat option to remove multiple tags) --all-tag Clear all tags associated with the port
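The following sketch ties several of the port subcommands together. All names and addresses are placeholders, and the network, subnet, and security group are assumed to exist already.

# Create a port with a fixed IP, a security group, and a tag:
openstack port create --network private \
  --fixed-ip subnet=private-subnet,ip-address=192.0.2.10 \
  --security-group web-sg --tag web web-port-1
# Inspect it, disable it, and finally remove it:
openstack port show web-port-1 -f json
openstack port set --disable web-port-1
openstack port delete web-port-1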
[ "openstack port create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --network <network> [--description <description>] [--device <device-id>] [--mac-address <mac-address>] [--device-owner <device-owner>] [--vnic-type <vnic-type>] [--host <host-id>] [--dns-domain dns-domain] [--dns-name <dns-name>] [--numa-policy-required | --numa-policy-preferred | --numa-policy-legacy] [--fixed-ip subnet=<subnet>,ip-address=<ip-address> | --no-fixed-ip] [--binding-profile <binding-profile>] [--enable | --disable] [--enable-uplink-status-propagation | --disable-uplink-status-propagation] [--project <project>] [--project-domain <project-domain>] [--extra-dhcp-option name=<name>[,value=<value>,ip-version={4,6}]] [--security-group <security-group> | --no-security-group] [--qos-policy <qos-policy>] [--enable-port-security | --disable-port-security] [--allowed-address ip-address=<ip-address>[,mac-address=<mac-address>]] [--device-profile <device-profile>] [--tag <tag> | --no-tag] <name>", "openstack port delete [-h] <port> [<port> ...]", "openstack port list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--device-owner <device-owner>] [--host <host-id>] [--network <network>] [--router <router> | --server <server> | --device-id <device-id>] [--mac-address <mac-address>] [--long] [--project <project>] [--name <name>] [--project-domain <project-domain>] [--fixed-ip subnet=<subnet>,ip-address=<ip-address>,ip-substring=<ip-substring>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack port set [-h] [--description <description>] [--device <device-id>] [--mac-address <mac-address>] [--device-owner <device-owner>] [--vnic-type <vnic-type>] [--host <host-id>] [--dns-domain dns-domain] [--dns-name <dns-name>] [--numa-policy-required | --numa-policy-preferred | --numa-policy-legacy] [--enable | --disable] [--name <name>] [--fixed-ip subnet=<subnet>,ip-address=<ip-address>] [--no-fixed-ip] [--binding-profile <binding-profile>] [--no-binding-profile] [--qos-policy <qos-policy>] [--security-group <security-group>] [--no-security-group] [--enable-port-security | --disable-port-security] [--allowed-address ip-address=<ip-address>[,mac-address=<mac-address>]] [--no-allowed-address] [--extra-dhcp-option name=<name>[,value=<value>,ip-version={4,6}]] [--data-plane-status <status>] [--tag <tag>] [--no-tag] <port>", "openstack port show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port>", "openstack port unset [-h] [--fixed-ip subnet=<subnet>,ip-address=<ip-address>] [--binding-profile <binding-profile-key>] [--security-group <security-group>] [--allowed-address ip-address=<ip-address>[,mac-address=<mac-address>]] [--qos-policy] [--data-plane-status] [--numa-policy] [--tag <tag> | --all-tag] <port>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/port
Chapter 1. Overview of Insights for Red Hat Enterprise Linux tasks
Chapter 1. Overview of Insights for Red Hat Enterprise Linux tasks Tasks is part of the Automation Toolkit for Insights for Red Hat Enterprise Linux. Tasks offer predefined playbooks that help you maintain the health of your infrastructure by simplifying and solving complex problems using automated tasks. Tasks solve specific problems and are typically executed one time on your systems to accomplish things such as detecting a high-profile vulnerability on your systems or preparing systems for a major upgrade. You can find tasks in Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks . Insights for Red Hat Enterprise Linux anticipates your need to solve problems and get things done in your infrastructure, and continuously adds specific tasks to the Automation Toolkit. Tasks include the following: RHEL pre-upgrade analysis utility task Pre-conversion analysis utility task Convert to RHEL from CentOS Linux 7 Some important information you will need to get started with executing a task includes the following: User Access settings in the Red Hat Hybrid Cloud Console . Find out what role or level of user access you need to complete tasks. Registering and connecting systems to Red Hat Insights to execute tasks . Using tasks requires you to register and connect systems to Insights. Executing Tasks using Red Hat Insights . Understand how to find and execute tasks. Note System requirements to execute different tasks might vary. 1.1. User Access settings in the Red Hat Hybrid Cloud Console All users on your account have access to most of the data in Insights for Red Hat Enterprise Linux. 1.1.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 1.1.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group, its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. 1.1.2. User Access roles for Insights Tasks users The following role enables enhanced access to remediations features in Insights for Red Hat Enterprise Linux: Tasks administrator. The Tasks administrator role permits access to all Tasks capabilities to remotely execute Tasks on Insights-connected systems. Note All members of the Default Admin Access group can also execute Tasks. A Tasks viewer role does not exist.
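For reference, connecting a system so that it can be targeted by tasks usually comes down to registering it with Red Hat Insights from the command line. The following is a sketch only: whether rhc is available depends on the RHEL version, and insights-client must already be installed on the system.

# On recent RHEL releases, register with subscription management and connect to Insights:
rhc connect
# Alternatively, register the Insights client directly:
insights-client --register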
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks_with_fedramp/overview-tasks
Chapter 11. OpenShift Serverless support
Chapter 11. OpenShift Serverless support If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal at http://access.redhat.com . You can use the Red Hat Customer Portal to search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. You can also submit a support case to Red Hat Global Support Services (GSS), or access other product documentation. If you have a suggestion for improving this guide or have found an error, you can submit a Jira issue for the most relevant documentation component. Provide specific details, such as the section number, guide name, and OpenShift Serverless version so we can easily locate the content. 11.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat's products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. 11.2. Searching the Red Hat Knowledgebase In the event of an OpenShift Container Platform issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including: OpenShift Container Platform components (such as etcd ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click Search . Select the OpenShift Container Platform product filter. Select the Knowledgebase content type filter. 11.3. Submitting a support case Prerequisites You have installed the OpenShift CLI ( oc ). You have a Red Hat Customer Portal account. You have access to OpenShift Cluster Manager . Procedure Log in to the Red Hat Customer Portal and select SUPPORT CASES Open a case . Select the appropriate category for your issue (such as Defect / Bug ), product ( OpenShift Container Platform ), and product version ( 4.9 , if this is not already autofilled). Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue . Enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue . Ensure that the account information presented is as expected, and if not, amend accordingly. Check that the autofilled OpenShift Container Platform Cluster ID is correct. If it is not, manually obtain your cluster ID. To manually obtain your cluster ID using the OpenShift Container Platform web console: Navigate to Home Dashboards Overview . Find the value in the Cluster ID field of the Details section. Alternatively, it is possible to open a new support case through the OpenShift Container Platform web console and have your cluster ID autofilled. 
From the toolbar, navigate to (?) Help Open Support Case . The Cluster ID value is autofilled. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' Complete the following questions where prompted and then click Continue : Where are you experiencing the behavior? What environment? When does the behavior occur? Frequency? Repeatedly? At certain times? What information can you provide around time-frames and the business impact? Upload relevant diagnostic data files and click Continue . It is recommended to include data gathered using the oc adm must-gather command as a starting point, plus any issue specific data that is not collected by that command. Input relevant case management details and click Continue . Preview the case details and click Submit . 11.4. Gathering diagnostic information for support When you open a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including data related to OpenShift Serverless. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift Serverless. 11.4.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... 11.4.2. About collecting OpenShift Serverless data You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with OpenShift Serverless. To collect OpenShift Serverless data with must-gather , you must specify the OpenShift Serverless image and the image tag for your installed version of OpenShift Serverless. Prerequisites Install the OpenShift CLI ( oc ). Procedure Collect data by using the oc adm must-gather command: USD oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:<image_version_tag> Example command USD oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:1.14.0
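After the command finishes, you typically compress the output directory before attaching it to the support case. The exact directory name varies per run; the glob below assumes the default must-gather.local. prefix.

# Package the must-gather output for upload to the support case:
tar czf must-gather.tar.gz must-gather.local.*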
[ "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v4.9.0", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:<image_version_tag>", "oc adm must-gather --image=registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8:1.14.0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/serverless/serverless-support
Release Notes
Release Notes Red Hat JBoss Data Virtualization 6.4 Errata and late-breaking news for this release. Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/release_notes/index
Chapter 5. The WS-Policy Framework
Chapter 5. The WS-Policy Framework Abstract This chapter provides an introduction to the basic concepts of the WS-Policy framework, defining policy subjects and policy assertions, and explaining how policy assertions can be combined to make policy expressions. 5.1. Introduction to WS-Policy Overview The WS-Policy specification provides a general framework for applying policies that modify the semantics of connections and communications at runtime in a Web services application. Apache CXF security uses the WS-Policy framework to configure message protection and authentication requirements. Policies and policy references The simplest way to specify a policy is to embed it directly where you want to apply it. For example, to associate a policy with a specific port in the WSDL contract, you can specify it as follows: An alternative way to specify a policy is to insert a policy reference element, wsp:PolicyReference , at the point where you want to apply the policy and then insert the policy element, wsp:Policy , at some other point in the XML file. For example, to associate a policy with a specific port using a policy reference, you could use a configuration like the following: Where the policy reference, wsp:PolicyReference , locates the referenced policy using the ID, PolicyID (note the addition of the # prefix character in the URI attribute). The policy itself, wsp:Policy , must be identified by adding the attribute, wsu:Id=" PolicyID " . Policy subjects The entities with which policies are associated are called policy subjects . For example, you can associate a policy with an endpoint, in which case the endpoint is the policy subject. It is possible to associate multiple policies with any given policy subject. The WS-Policy framework supports the following kinds of policy subject: the section called "Service policy subject" . the section called "Endpoint policy subject" . the section called "Operation policy subject" . the section called "Message policy subject" . Service policy subject To associate a policy with a service, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of the following WSDL 1.1 element: wsdl:service - apply the policy to all of the ports (endpoints) offered by this service. Endpoint policy subject To associate a policy with an endpoint, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of any of the following WSDL 1.1 elements: wsdl:portType - apply the policy to all of the ports (endpoints) that use this port type. wsdl:binding - apply the policy to all of the ports that use this binding. wsdl:port - apply the policy to this endpoint only.
For example, you can associate a policy with an endpoint binding as follows (using a policy reference): Operation policy subject To associate a policy with an operation, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of any of the following WSDL 1.1 elements: wsdl:portType/wsdl:operation wsdl:binding/wsdl:operation For example, you can associate a policy with an operation in a binding as follows (using a policy reference): Message policy subject To associate a policy with a message, insert either a <wsp:Policy> element or a <wsp:PolicyReference> element as a sub-element of any of the following WSDL 1.1 elements: wsdl:message wsdl:portType/wsdl:operation/wsdl:input wsdl:portType/wsdl:operation/wsdl:output wsdl:portType/wsdl:operation/wsdl:fault wsdl:binding/wsdl:operation/wsdl:input wsdl:binding/wsdl:operation/wsdl:output wsdl:binding/wsdl:operation/wsdl:fault For example, you can associate a policy with a message in a binding as follows (using a policy reference): 5.2. Policy Expressions Overview In general, a wsp:Policy element is composed of multiple different policy settings (where individual policy settings are specified as policy assertions ). Hence, the policy defined by a wsp:Policy element is really a composite object. The content of the wsp:Policy element is called a policy expression , where the policy expression consists of various logical combinations of the basic policy assertions. By tailoring the syntax of the policy expression, you can determine what combinations of policy assertions must be satisfied at runtime in order to satisfy the policy overall. This section describes the syntax and semantics of policy expressions in detail. Policy assertions Policy assertions are the basic building blocks that can be combined in various ways to produce a policy. A policy assertion has two key characteristics: it adds a basic unit of functionality to the policy subject and it represents a boolean assertion to be evaluated at runtime. For example, consider the following policy assertion that requires a WS-Security username token to be propagated with request messages: When associated with an endpoint policy subject, this policy assertion has the following effects: The Web service endpoint marshals/unmarshals the UsernameToken credentials. At runtime, the policy assertion returns true , if UsernameToken credentials are provided (on the client side) or received in the incoming message (on the server side); otherwise the policy assertion returns false . Note that if a policy assertion returns false , this does not necessarily result in an error. The net effect of a particular policy assertion depends on how it is inserted into a policy and on how it is combined with other policy assertions. Policy alternatives A policy is built up using policy assertions, which can additionally be qualified using the wsp:Optional attribute, and various nested combinations of the wsp:All and wsp:ExactlyOne elements. The net effect of composing these elements is to produce a range of acceptable policy alternatives . As long as one of these acceptable policy alternatives is satisfied, the overall policy is also satisfied (evaluates to true ). wsp:All element When a list of policy assertions is wrapped by the wsp:All element, all of the policy assertions in the list must evaluate to true .
For example, consider the following combination of authentication and authorization policy assertions: The preceding policy will be satisfied for a particular incoming request if both of the following conditions hold: WS-Security UsernameToken credentials must be present; and A SAML token must be present. Note The wsp:Policy element is semantically equivalent to wsp:All . Hence, if you removed the wsp:All element from the preceding example, you would obtain a semantically equivalent example. wsp:ExactlyOne element When a list of policy assertions is wrapped by the wsp:ExactlyOne element, at least one of the policy assertions in the list must evaluate to true . The runtime goes through the list, evaluating policy assertions until it finds a policy assertion that returns true . At that point, the wsp:ExactlyOne expression is satisfied (returns true ) and any remaining policy assertions from the list will not be evaluated. For example, consider the following combination of authentication policy assertions: The preceding policy will be satisfied for a particular incoming request if either of the following conditions holds: WS-Security UsernameToken credentials are present; or A SAML token is present. Note, in particular, that if both credential types are present, the policy would be satisfied after evaluating one of the assertions, but no guarantees can be given as to which of the policy assertions actually gets evaluated. The empty policy A special case is the empty policy , an example of which is shown in Example 5.1, "The Empty Policy" . Example 5.1. The Empty Policy Where the empty policy alternative, <wsp:All/> , represents an alternative for which no policy assertions need be satisfied. In other words, it always returns true . When <wsp:All/> is available as an alternative, the overall policy can be satisfied even when no policy assertions are true . The null policy A special case is the null policy , an example of which is shown in Example 5.2, "The Null Policy" . Example 5.2. The Null Policy Where the null policy alternative, <wsp:ExactlyOne/> , represents an alternative that is never satisfied. In other words, it always returns false . Normal form In practice, by nesting the <wsp:All> and <wsp:ExactlyOne> elements, you can produce fairly complex policy expressions, whose policy alternatives might be difficult to work out. To facilitate the comparison of policy expressions, the WS-Policy specification defines a canonical or normal form for policy expressions, such that you can read off the list of policy alternatives unambiguously. Every valid policy expression can be reduced to the normal form. In general, a normal form policy expression conforms to the syntax shown in Example 5.3, "Normal Form Syntax" . Example 5.3. Normal Form Syntax Where each line of the form, <wsp:All>... </wsp:All> , represents a valid policy alternative. If one of these policy alternatives is satisfied, the policy is satisfied overall.
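To make the reduction to normal form concrete, the following sketch (not taken from the guide; AssertionA, AssertionB, and AssertionC are hypothetical placeholder assertions) shows a compact policy expression and its normal form equivalent. The top-level assertion is distributed across both alternatives, so the normal form spells out each complete alternative explicitly:

```xml
<!-- Compact form: one mandatory assertion plus a choice between two others.
     AssertionA, AssertionB, and AssertionC are hypothetical placeholders. -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
  <AssertionA/>
  <wsp:ExactlyOne>
    <AssertionB/>
    <AssertionC/>
  </wsp:ExactlyOne>
</wsp:Policy>

<!-- Equivalent normal form: each wsp:All element lists one complete policy alternative. -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
  <wsp:ExactlyOne>
    <wsp:All>
      <AssertionA/>
      <AssertionB/>
    </wsp:All>
    <wsp:All>
      <AssertionA/>
      <AssertionC/>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```

Reading the normal form, the two acceptable alternatives are "AssertionA and AssertionB" or "AssertionA and AssertionC"; satisfying either one satisfies the policy overall.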
[ "<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:service name=\"PingService10\"> <wsdl:port name=\"UserNameOverTransport_IPingService\" binding=\" BindingName \"> <wsp:Policy> <!-- Policy expression comes here! --> </wsp:Policy> <soap:address location=\" SOAPAddress \"/> </wsdl:port> </wsdl:service> </wsdl:definitions>", "<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:service name=\"PingService10\"> <wsdl:port name=\"UserNameOverTransport_IPingService\" binding=\" BindingName \"> <wsp:PolicyReference URI=\"#PolicyID\"/> <soap:address location=\" SOAPAddress \"/> </wsdl:port> </wsdl:service> <wsp:Policy wsu:Id=\"PolicyID\" > <!-- Policy expression comes here ... --> </wsp:Policy> </wsdl:definitions>", "<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:binding name=\" EndpointBinding \" type=\"i0:IPingService\"> <wsp:PolicyReference URI=\"#PolicyID\"/> </wsdl:binding> <wsp:Policy wsu:Id=\"PolicyID\" > ... </wsp:Policy> </wsdl:definitions>", "<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:binding name=\" EndpointBinding \" type=\"i0:IPingService\"> <wsdl:operation name=\"Ping\"> <wsp:PolicyReference URI=\"#PolicyID\"/> <soap:operation soapAction=\"http://xmlsoap.org/Ping\" style=\"document\"/> <wsdl:input name=\"PingRequest\"> ... </wsdl:input> <wsdl:output name=\"PingResponse\"> ... </wsdl:output> </wsdl:operation> </wsdl:binding> <wsp:Policy wsu:Id=\"PolicyID\" > ... </wsp:Policy> </wsdl:definitions>", "<wsdl:definitions targetNamespace=\"http://tempuri.org/\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" ... > <wsdl:binding name=\" EndpointBinding \" type=\"i0:IPingService\"> <wsdl:operation name=\"Ping\"> <soap:operation soapAction=\"http://xmlsoap.org/Ping\" style=\"document\"/> <wsdl:input name=\"PingRequest\"> <wsp:PolicyReference URI=\"#PolicyID\"/> <soap:body use=\"literal\"/> </wsdl:input> <wsdl:output name=\"PingResponse\"> ... </wsdl:output> </wsdl:operation> </wsdl:binding> <wsp:Policy wsu:Id=\"PolicyID\" > ... 
</wsp:Policy> </wsdl:definitions>", "<sp:SupportingTokens xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <wsp:Policy> <sp:UsernameToken/> </wsp:Policy> </sp:SupportingTokens>", "<wsp:Policy wsu:Id=\"AuthenticateAndAuthorizeWSSUsernameTokenPolicy\"> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken/> </wsp:Policy> </sp:SupportingTokens> <sp:SupportingTokens> <wsp:Policy> <sp:SamlToken/> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:Policy>", "<wsp:Policy wsu:Id=\"AuthenticateUsernamePasswordPolicy\"> <wsp:ExactlyOne> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken/> </wsp:Policy> </sp:SupportingTokens> <sp:SupportingTokens> <wsp:Policy> <sp:SamlToken/> </wsp:Policy> </sp:SupportingTokens> </wsp:ExactlyOne> </wsp:Policy>", "<wsp:Policy ... > <wsp:ExactlyOne> <wsp:All/> </wsp:ExactlyOne> </wsp:Policy>", "<wsp:Policy ... > <wsp:ExactlyOne/> </wsp:Policy>", "<wsp:Policy ... > <wsp:ExactlyOne> <wsp:All> < Assertion .../> ... < Assertion .../> </wsp:All> <wsp:All> < Assertion .../> ... < Assertion .../> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/WsPolicy
Chapter 11. Reviewing test results and completing certifications
Chapter 11. Reviewing test results and completing certifications 11.1. Red Hat review of test results After you submit your results, the review team will analyze their contents and award credit for each passing test that is part of the test plan. As they verify each passing test, the team sets each test plan item to Confirmed on the certification site's test plan, which you can see under the Results tab on the catalog. This allows you to see at a glance which tests are outstanding and which have been verified as passing. If any problems are found, the review team will update the certification request with a question, which will automatically be emailed to the person who submitted the cert. You can see all the discussion, and respond to or ask any questions, on the Dialog tab of the certification. 11.2. Completing certifications A certification is complete after Red Hat confirms that all the items in the test plan have passed. At this point, you can choose whether to close and publish the certification or to close and leave the certification unpublished. Supplemental certifications always remain unpublished. System and component certifications can be left unpublished if you do not want to advertise the certification status or the existence of the system or component. The system information and the discussions between you and the Red Hat review team will not be visible to the public in the published certification. If these publication options do not meet your requirements, submit a request for an exception while the certification is open, or a case if it is already closed.
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_test_suite_user_guide/assembly_reviewing-test-results-and-completing-certifications_hw-test-suite-leveraging
5.3. Modifying a Tag
5.3. Modifying a Tag You can edit the name and description of a tag. Modifying a Tag Click the Tags icon in the header bar. Select the tag you want to modify and click Edit . Change the Name and Description fields as necessary. Click OK .
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/modifying_a_tag
Part IV. Interacting with Red Hat Decision Manager using KIE APIs
Part IV. Interacting with Red Hat Decision Manager using KIE APIs As a business rules developer or system administrator, you can use KIE APIs to interact with KIE Servers, KIE containers, and business assets in Red Hat Decision Manager. You can use the KIE Server REST API and Java client API to interact with KIE containers and business assets (such as business rules, processes, and solvers), the Process Automation Manager controller REST API and Java client API to interact with KIE Server templates and instances, and the Knowledge Store REST API to interact with spaces and projects in Business Central. REST API endpoints for KIE Server and the Process Automation Manager controller The lists of REST API endpoints for KIE Server and the Process Automation Manager controller are published separately from this document and maintained dynamically to ensure that endpoint options and data are as current as possible. Use this document to understand what the KIE Server and Process Automation Manager controller REST APIs enable you to do and how to use them, and use the separately maintained lists of REST API endpoints for specific endpoint details. For the full list of KIE Server REST API endpoints and descriptions, use one of the following resources: Execution Server REST API on the jBPM Documentation page (static) Swagger UI for the KIE Server REST API at http://SERVER:PORT/kie-server/docs (dynamic, requires running KIE Server) For the full list of Process Automation Manager controller REST API endpoints and descriptions, use one of the following resources: Controller REST API on the jBPM Documentation page (static) Swagger UI for the Process Automation Manager controller REST API at http://SERVER:PORT/CONTROLLER/docs (dynamic, requires running Process Automation Manager controller) Prerequisites Red Hat Decision Manager is installed and running. For installation and startup options, see Planning a Red Hat Decision Manager installation . You have access to Red Hat Decision Manager with the following user roles: kie-server : For access to KIE Server API capabilities, and access to headless Process Automation Manager controller API capabilities without Business Central (if applicable) rest-all : For access to Business Central API capabilities for the built-in Process Automation Manager controller and for the Business Central Knowledge Store admin : For full administrative access to Red Hat Decision Manager Although these user roles are not all required for every KIE API, consider acquiring all of them to ensure that you can access any KIE API without disruption. For more information about user roles, see Planning a Red Hat Decision Manager installation .
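As a quick illustration of REST access, the following sketch queries basic KIE Server information. This is an assumption-heavy example rather than an excerpt from this guide: the /services/rest/server information endpoint is taken from the standard KIE Server REST API, and the host, port, and credentials are placeholders for your environment.

```bash
# Hypothetical example: check that a KIE Server instance responds over REST.
# Requires a user with the kie-server role; replace SERVER, PORT, and the
# credentials with values from your own installation.
curl -u 'kieserver-user:password' \
     -H 'Accept: application/json' \
     http://SERVER:PORT/kie-server/services/rest/server
```

A successful call returns the server identifier, version, and capabilities, which is a convenient smoke test before working with the KIE container and process endpoints listed in the separately maintained endpoint references.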
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/assembly-kie-apis
1.8. Driver Connection URL Format
1.8. Driver Connection URL Format URLs used when establishing a connection using the driver class have the following format: Given this format, the following table describes the variable parts of the URL: Table 1.1. URL Entities Variable Name Description VDB-NAME The name of the virtual database (VDB) to which the application is connected. Important VDB names can contain version information; for example, myvdb.2 . If such a name is used in the URL, this has the same effect as supplying a version=2 connection property. Note that if the VDB name contains version information, you cannot also use the version property in the same request. mm[s] The JBoss Data Virtualization JDBC protocol. mm is the default for normal connections. mms uses SSL for encryption and is the default for the AdminAPI tools. HOSTNAME The server where JBoss Data Virtualization is installed. PORT The port on which JBoss Data Virtualization is listening for incoming JDBC connections. [prop-name=prop-value] Any number of additional name-value pairs can be supplied in the URL, separated by semi-colons. Property values must be URL encoded if they contain reserved characters, for example, ? , = , and ; .
[ "jdbc:teiid: VDB-NAME @ mm[s] :// HOSTNAME : PORT ; [prop-name=prop-value;] *" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/driver_connection_url_format1
Chapter 84. ExternalConfigurationEnv schema reference
Chapter 84. ExternalConfigurationEnv schema reference Used in: ExternalConfiguration Property Property type Description name string Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . valueFrom ExternalConfigurationEnvVarSource Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to Secret or ConfigMap field. The field has to specify exactly one Secret or ConfigMap.
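To show where this schema is used, here is a hedged sketch of a KafkaConnect resource that passes a Secret value to the Kafka Connect pods through externalConfiguration.env. The resource, Secret, and key names are placeholders, and the surrounding Kafka Connect settings are elided.

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster          # placeholder name
spec:
  # ... other Kafka Connect configuration ...
  externalConfiguration:
    env:
      - name: MY_DB_PASSWORD        # must not start with KAFKA_ or STRIMZI_
        valueFrom:
          secretKeyRef:             # exactly one Secret or ConfigMap reference
            name: my-database-credentials
            key: password
```

The valueFrom field corresponds to the ExternalConfigurationEnvVarSource type described above; a configMapKeyRef reference could be used in place of secretKeyRef, but only one of the two may be set.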
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-externalconfigurationenv-reference
Chapter 4. Mirroring images for a disconnected installation using the oc-mirror plugin
Chapter 4. Mirroring images for a disconnected installation using the oc-mirror plugin Running your cluster in a restricted network without direct internet connectivity is possible by installing the cluster from a mirrored set of OpenShift Container Platform container images in a private registry. This registry must be running at all times as long as the cluster is running. See the Prerequisites section for more information. You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror images to a mirror registry in your fully or partially disconnected environments. You must run oc-mirror from a system with internet connectivity in order to download the required images from the official Red Hat registries. 4.1. About the oc-mirror plugin You can use the oc-mirror OpenShift CLI ( oc ) plugin to mirror all required OpenShift Container Platform content and other images to your mirror registry by using a single tool. It provides the following features: Provides a centralized method to mirror OpenShift Container Platform releases, Operators, helm charts, and other images. Maintains update paths for OpenShift Container Platform and Operators. Uses a declarative image set configuration file to include only the OpenShift Container Platform releases, Operators, and images that your cluster needs. Performs incremental mirroring, which reduces the size of future image sets. Prunes images from the target mirror registry that were excluded from the image set configuration since the previous execution. Optionally generates supporting artifacts for OpenShift Update Service (OSUS) usage. When using the oc-mirror plugin, you specify which content to mirror in an image set configuration file. In this YAML file, you can fine-tune the configuration to only include the OpenShift Container Platform releases and Operators that your cluster needs. This reduces the amount of data that you need to download and transfer. The oc-mirror plugin can also mirror arbitrary helm charts and additional container images to assist users in seamlessly synchronizing their workloads onto mirror registries. The first time you run the oc-mirror plugin, it populates your mirror registry with the required content to perform your disconnected cluster installation or update. In order for your disconnected cluster to continue receiving updates, you must keep your mirror registry updated. To update your mirror registry, you run the oc-mirror plugin using the same configuration as the first time you ran it. The oc-mirror plugin references the metadata from the storage backend and only downloads what has been released since the last time you ran the tool. This provides update paths for OpenShift Container Platform and Operators and performs dependency resolution as required. 4.1.1. High level workflow The following steps outline the high-level workflow of how to use the oc-mirror plugin to mirror images to a mirror registry: Create an image set configuration file. Mirror the image set to the target mirror registry by using one of the following methods: Mirror an image set directly to the target mirror registry. Mirror an image set to disk, transfer the image set to the target environment, then upload the image set to the target mirror registry. Configure your cluster to use the resources generated by the oc-mirror plugin. Repeat these steps to update your target mirror registry as necessary. 
Important When using the oc-mirror CLI plugin to populate a mirror registry, any further updates to the target mirror registry must be made by using the oc-mirror plugin. 4.2. oc-mirror plugin compatibility and support The oc-mirror plugin supports mirroring OpenShift Container Platform payload images and Operator catalogs for OpenShift Container Platform versions 4.12 and later. Note On aarch64 , ppc64le , and s390x architectures the oc-mirror plugin is only supported for OpenShift Container Platform versions 4.14 and later. Use the latest available version of the oc-mirror plugin regardless of which versions of OpenShift Container Platform you need to mirror. Additional resources For information on updating oc-mirror, see Viewing the image pull source . 4.3. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry that supports Docker v2-2 , such as Red Hat Quay. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , which is a small-scale container registry included with OpenShift Container Platform subscriptions. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The OpenShift image registry cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional resources For information about viewing the CRI-O logs to view the image source, see Viewing the image pull source . 4.4. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as Red Hat Quay. Note If you use Red Hat Quay, you must use version 3.6 or later with the oc-mirror plugin. 
If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Red Hat Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat Support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 4.5. Preparing your mirror hosts Before you can use the oc-mirror plugin to mirror images, you must install the plugin and create a container image registry credentials file to allow the mirroring from Red Hat to your mirror. 4.5.1. Installing the oc-mirror OpenShift CLI plugin Install the oc-mirror OpenShift CLI plugin to manage image sets in disconnected environments. Prerequisites You have installed the OpenShift CLI ( oc ). If you are mirroring image sets in a fully disconnected environment, ensure the following: You have installed the oc-mirror plugin on the host that has internet access. The host in the disconnected environment has access to the target mirror registry. You have set the umask parameter to 0022 on the operating system that uses oc-mirror. You have installed the correct binary for the RHEL version that you are using. Procedure Download the oc-mirror CLI plugin. Navigate to the Downloads page of the OpenShift Cluster Manager . Under the OpenShift disconnected installation tools section, click Download for OpenShift Client (oc) mirror plugin and save the file. Extract the archive: USD tar xvzf oc-mirror.tar.gz If necessary, update the plugin file to be executable: USD chmod +x oc-mirror Note Do not rename the oc-mirror file. Install the oc-mirror CLI plugin by placing the file in your PATH , for example, /usr/local/bin : USD sudo mv oc-mirror /usr/local/bin/. Verification Verify that the plugin for oc-mirror v1 is successfully installed by running the following command: USD oc mirror help Additional resources Installing and using CLI plugins 4.5.2. Configuring credentials that allow images to be mirrored Create a container image registry credentials file that enables you to mirror images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from Red Hat OpenShift Cluster Manager . Make a copy of your pull secret in JSON format: USD cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. 
The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Save the file as either ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json : If the .docker or USDXDG_RUNTIME_DIR/containers directories do not exist, create one by entering the following command: USD mkdir -p <directory_name> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers . Copy the pull secret to the appropriate directory by entering the following command: USD cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file> Where <directory_name> is either ~/.docker or USDXDG_RUNTIME_DIR/containers , and <auth_file> is either config.json or auth.json . Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Edit the JSON file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 Specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 Specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 4.6. Creating the image set configuration Before you can use the oc-mirror plugin to mirror image sets, you must create an image set configuration file. This image set configuration file defines which OpenShift Container Platform releases, Operators, and other images to mirror, along with other configuration settings for the oc-mirror plugin. You must specify a storage backend in the image set configuration file. This storage backend can be a local directory or a registry that supports Docker v2-2 . The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have created a container image registry credentials file. For instructions, see "Configuring credentials that allow images to be mirrored". Procedure Use the oc mirror init command to create a template for the image set configuration and save it to a file called imageset-config.yaml : USD oc mirror init --registry <storage_backend> > imageset-config.yaml 1 1 Specifies the location of your storage backend, such as example.com/mirror/oc-mirror-metadata . 
Edit the file and adjust the settings as necessary: kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {} 1 Add archiveSize to set the maximum size, in GiB, of each file within the image set. 2 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values. 3 Set the registry URL for the storage backend. 4 Set the channel to retrieve the OpenShift Container Platform images from. 5 Add graph: true to build and push the graph-data image to the mirror registry. The graph-data image is required to create OpenShift Update Service (OSUS). The graph: true field also generates the UpdateService custom resource manifest. The oc command-line interface (CLI) can use the UpdateService custom resource manifest to create OSUS. For more information, see About the OpenShift Update Service . 6 Set the Operator catalog to retrieve the OpenShift Container Platform images from. 7 Specify only certain Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog. 8 Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 9 Specify any additional images to include in image set. Note The graph: true field also mirrors the ubi-micro image along with other mirrored images. When upgrading OpenShift Container Platform Extended Update Support (EUS) versions, an intermediate version might be required between the current and target versions. For example, if the current version is 4.14 and target version is 4.16 , you might need to include a version such as 4.15.8 in the ImageSetConfiguration when using the oc-mirror plugin v1. The oc-mirror plugin v1 might not always detect this automatically, so check the Cincinnati graph web page to confirm any required intermediate versions and add them manually to your configuration. See "Image set configuration parameters" for the full list of parameters and "Image set configuration examples" for various mirroring use cases. Save the updated file. This image set configuration file is required by the oc mirror command when mirroring content. Additional resources Image set configuration parameters Image set configuration examples Using the OpenShift Update Service in a disconnected environment 4.7. Mirroring an image set to a mirror registry You can use the oc-mirror CLI plugin to mirror images to a mirror registry in a partially disconnected environment or in a fully disconnected environment . These procedures assume that you already have your mirror registry set up. 4.7.1. Mirroring an image set in a partially disconnected environment In a partially disconnected environment, you can mirror an image set directly to the target mirror registry. 4.7.1.1. 
Mirroring from mirror to mirror You can use the oc-mirror plugin to mirror an image set directly to a target mirror registry that is accessible during image set creation. You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a Docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to get the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command to mirror the images from the specified image set configuration to a specified registry: USD oc mirror --config=./<imageset-config.yaml> \ 1 docker://registry.example:5000 2 1 Specify the image set configuration file that you created. For example, imageset-config.yaml . 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. Note The repositoryDigestMirrors section of the ImageContentSourcePolicy YAML file is used for the install-config.yaml file during installation. steps Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . 4.7.2. Mirroring an image set in a fully disconnected environment To mirror an image set in a fully disconnected environment, you must first mirror the image set to disk , then mirror the image set file on disk to a mirror . 4.7.2.1. Mirroring from mirror to disk You can use the oc-mirror plugin to generate an image set and save the contents to disk. The generated image set can then be transferred to the disconnected environment and mirrored to the target registry. Important Depending on the configuration specified in the image set configuration file, using oc-mirror to mirror images might download several hundreds of gigabytes of data to disk. The initial image set download when you populate the mirror registry is often the largest. Because you only download the images that changed since the last time you ran the command, when you run the oc-mirror plugin again, the generated image set is often smaller. You are required to specify a storage backend in the image set configuration file. This storage backend can be a local directory or a docker v2 registry. The oc-mirror plugin stores metadata in this storage backend during image set creation. Important Do not delete or modify the metadata that is generated by the oc-mirror plugin. You must use the same storage backend every time you run the oc-mirror plugin for the same mirror registry. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. 
Procedure Run the oc mirror command to mirror the images from the specified image set configuration to disk: USD oc mirror --config=./imageset-config.yaml \ 1 file://<path_to_output_directory> 2 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the target directory where you want to output the image set file. The target directory path must start with file:// . Verification Navigate to your output directory: USD cd <path_to_output_directory> Verify that an image set .tar file was created: USD ls Example output mirror_seq1_000000.tar steps Transfer the image set .tar file to the disconnected environment. Troubleshooting Unable to retrieve source image . 4.7.2.2. Mirroring from disk to mirror You can use the oc-mirror plugin to mirror the contents of a generated image set to the target mirror registry. Prerequisites You have installed the OpenShift CLI ( oc ) in the disconnected environment. You have installed the oc-mirror CLI plugin in the disconnected environment. You have generated the image set file by using the oc mirror command. You have transferred the image set file to the disconnected environment. Procedure Run the oc mirror command to process the image set file on disk and mirror the contents to a target mirror registry: USD oc mirror --from=./mirror_seq1_000000.tar \ 1 docker://registry.example:5000 2 1 Pass in the image set .tar file to mirror, named mirror_seq1_000000.tar in this example. If an archiveSize value was specified in the image set configuration file, the image set might be broken up into multiple .tar files. In this situation, you can pass in a directory that contains the image set .tar files. 2 Specify the registry to mirror the image set file to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. This command updates the mirror registry with the image set and generates the ImageContentSourcePolicy and CatalogSource resources. Verification Navigate into the oc-mirror-workspace/ directory that was generated. Navigate into the results directory, for example, results-1639608409/ . Verify that YAML files are present for the ImageContentSourcePolicy and CatalogSource resources. steps Configure your cluster to use the resources generated by oc-mirror. Troubleshooting Unable to retrieve source image . 4.8. Configuring your cluster to use the resources generated by oc-mirror After you have mirrored your image set to the mirror registry, you must apply the generated ImageContentSourcePolicy , CatalogSource , and release image signature resources into the cluster. The ImageContentSourcePolicy resource associates the mirror registry with the source registry and redirects image pull requests from the online registries to the mirror registry. The CatalogSource resource is used by Operator Lifecycle Manager (OLM) to retrieve information about the available Operators in the mirror registry. The release image signatures are used to verify the mirrored release images. Prerequisites You have mirrored the image set to the registry mirror in the disconnected environment. You have access to the cluster as a user with the cluster-admin role. Procedure Log in to the OpenShift CLI as a user with the cluster-admin role. 
Apply the YAML files from the results directory to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1639608409/ If you mirrored release images, apply the release image signatures to the cluster by running the following command: USD oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/ Note If you are mirroring Operators instead of clusters, you do not need to run USD oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/ . Running that command will return an error, as there are no release image signatures to apply. Verification Verify that the ImageContentSourcePolicy resources were successfully installed by running the following command: USD oc get imagecontentsourcepolicy Verify that the CatalogSource resources were successfully installed by running the following command: USD oc get catalogsource -n openshift-marketplace 4.9. Updating your mirror registry content You can update your mirror registry content by updating the image set configuration file and mirroring the image set to the mirror registry. The next time that you run the oc-mirror plugin, an image set is generated that only contains new and updated images since the previous execution. While updating the mirror registry, you must take into account the following considerations: Images are pruned from the target mirror registry if they are no longer included in the latest image set that was generated and mirrored. Therefore, ensure that you are updating images for the same combination of the following key components so that only a differential image set is created and mirrored: Image set configuration Destination registry Storage configuration Images can be pruned in either the disk-to-mirror or the mirror-to-mirror workflow. The generated image sets must be pushed to the target mirror registry in sequence. You can derive the sequence number from the file name of the generated image set archive file. Do not delete or modify the metadata image that is generated by the oc-mirror plugin. If you specified a top-level namespace for the mirror registry during the initial image set creation, then you must use this same namespace every time you run the oc-mirror plugin for the same mirror registry. For more information about the workflow to update the mirror registry content, see the "High level workflow" section. 4.9.1. Mirror registry update examples This section covers the use cases for updating the mirror registry from disk to mirror. Example ImageSetConfiguration file that was previously used for mirroring apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable Mirroring a specific OpenShift Container Platform version by pruning the existing images Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.13 1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable 1 Replacing stable-4.12 with stable-4.13 prunes all the images of stable-4.12 . 
Updating to the latest version of an Operator by pruning the existing images Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable 1 1 Using the same channel without specifying a version prunes the existing images and updates with the latest version of images. Mirroring a new Operator by pruning the existing Operator Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: <new_operator_name> 1 channels: - name: stable 1 Replacing rhacs-operator with new_operator_name prunes the Red Hat Advanced Cluster Security for Kubernetes Operator. Pruning all the OpenShift Container Platform images Updated ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: Additional resources Image set configuration examples Mirroring an image set in a partially disconnected environment Mirroring an image set in a fully disconnected environment Configuring your cluster to use the resources generated by oc-mirror 4.10. Performing a dry run You can use oc-mirror to perform a dry run, without actually mirroring any images. This allows you to review the list of images that would be mirrored, as well as any images that would be pruned from the mirror registry. A dry run also allows you to catch any errors with your image set configuration early or use the generated list of images with other tools to carry out the mirroring operation. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. You have created the image set configuration file. Procedure Run the oc mirror command with the --dry-run flag to perform a dry run: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 \ 2 --dry-run 3 1 Pass in the image set configuration file that was created. This procedure assumes that it is named imageset-config.yaml . 2 Specify the mirror registry. Nothing is mirrored to this registry as long as you use the --dry-run flag. 3 Use the --dry-run flag to generate the dry run artifacts and not an actual image set file. Example output Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index ... info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt Navigate into the workspace directory that was generated: USD cd oc-mirror-workspace/ Review the mapping.txt file that was generated. 
This file contains a list of all images that would be mirrored. Review the pruning-plan.json file that was generated. This file contains a list of all images that would be pruned from the mirror registry when the image set is published. Note The pruning-plan.json file is only generated if your oc-mirror command points to your mirror registry and there are images to be pruned. 4.11. Including local OCI Operator catalogs While mirroring OpenShift Container Platform releases, Operator catalogs, and additional images from a registry to a partially disconnected cluster, you can include Operator catalog images from a local file-based catalog on disk. The local catalog must be in the Open Container Initiative (OCI) format. The local catalog and its contents are mirrored to your target mirror registry based on the filtering information in the image set configuration file. Important When mirroring local OCI catalogs, any OpenShift Container Platform releases or additional images that you want to mirror along with the local OCI-formatted catalog must be pulled from a registry. You cannot mirror OCI catalogs along with an oc-mirror image set file on disk. One example use case for using the OCI feature is if you have a CI/CD system building an OCI catalog to a location on disk, and you want to mirror that OCI catalog along with an OpenShift Container Platform release to your mirror registry. Note If you used the Technology Preview OCI local catalogs feature for the oc-mirror plugin for OpenShift Container Platform 4.12, you can no longer use the OCI local catalogs feature of the oc-mirror plugin to copy a catalog locally and convert it to OCI format as a first step to mirroring to a fully disconnected cluster. Prerequisites You have access to the internet to obtain the necessary container images. You have installed the OpenShift CLI ( oc ). You have installed the oc-mirror CLI plugin. Procedure Create the image set configuration file and adjust the settings as necessary. The following example image set configuration mirrors an OCI catalog on disk along with an OpenShift Container Platform release and a UBI image from registry.redhat.io . kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: local: path: /home/user/metadata 1 mirror: platform: channels: - name: stable-4.16 2 type: ocp graph: false operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog 3 targetCatalog: my-namespace/redhat-operator-index 4 packages: - name: aws-load-balancer-operator - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 5 packages: - name: rhacs-operator additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 6 1 Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values. 2 Optionally, include an OpenShift Container Platform release to mirror from registry.redhat.io . 3 Specify the absolute path to the location of the OCI catalog on disk. The path must start with oci:// when using the OCI feature. 4 Optionally, specify an alternative namespace and name to mirror the catalog as. 5 Optionally, specify additional Operator catalogs to pull from a registry. 6 Optionally, specify additional images to pull from a registry. Run the oc mirror command to mirror the OCI catalog to a target mirror registry: USD oc mirror --config=./imageset-config.yaml \ 1 docker://registry.example:5000 2 1 Pass in the image set configuration file. 
This procedure assumes that it is named imageset-config.yaml . 2 Specify the registry to mirror the content to. The registry must start with docker:// . If you specify a top-level namespace for the mirror registry, you must also use this same namespace on subsequent executions. Optionally, you can specify other flags to adjust the behavior of the OCI feature: --oci-insecure-signature-policy Do not push signatures to the target mirror registry. --oci-registries-config Specify the path to a TOML-formatted registries.conf file. You can use this to mirror from a different registry, such as a pre-production location for testing, without having to change the image set configuration file. This flag only affects local OCI catalogs, not any other mirrored content. Example registries.conf file [[registry]] location = "registry.redhat.io:5000" insecure = false blocked = false mirror-by-digest-only = true prefix = "" [[registry.mirror]] location = "preprod-registry.example.com" insecure = false steps Configure your cluster to use the resources generated by oc-mirror. Additional resources Configuring your cluster to use the resources generated by oc-mirror 4.12. Image set configuration parameters The oc-mirror plugin requires an image set configuration file that defines what images to mirror. The following table lists the available parameters for the ImageSetConfiguration resource. Table 4.1. ImageSetConfiguration parameters Parameter Description Values apiVersion The API version for the ImageSetConfiguration content. String. For example: mirror.openshift.io/v1alpha2 . archiveSize The maximum size, in GiB, of each archive file within the image set. Integer. For example: 4 mirror The configuration of the image set. Object mirror.additionalImages The additional images configuration of the image set. Array of objects. For example: additionalImages: - name: registry.redhat.io/ubi8/ubi:latest mirror.additionalImages.name The tag or digest of the image to mirror. String. For example: registry.redhat.io/ubi8/ubi:latest mirror.blockedImages The full tag, digest, or pattern of images to block from mirroring. Array of strings. For example: docker.io/library/alpine mirror.helm The helm configuration of the image set. Note that the oc-mirror plugin supports only helm charts that do not require user input when rendered. Object mirror.helm.local The local helm charts to mirror. Array of objects. For example: local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz mirror.helm.local.name The name of the local helm chart to mirror. String. For example: podinfo . mirror.helm.local.path The path of the local helm chart to mirror. String. For example: /test/podinfo-5.0.0.tar.gz . mirror.helm.repositories The remote helm repositories to mirror from. Array of objects. For example: repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0 mirror.helm.repositories.name The name of the helm repository to mirror from. String. For example: podinfo . mirror.helm.repositories.url The URL of the helm repository to mirror from. String. For example: https://example.github.io/podinfo . mirror.helm.repositories.charts The remote helm charts to mirror. Array of objects. mirror.helm.repositories.charts.name The name of the helm chart to mirror. String. For example: podinfo . mirror.helm.repositories.charts.version The version of the named helm chart to mirror. String. For example: 5.0.0 . mirror.operators The Operators configuration of the image set. Array of objects. 
For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '2.4.0' mirror.operators.catalog The Operator catalog to include in the image set. String. For example: registry.redhat.io/redhat/redhat-operator-index:v4.16 . mirror.operators.full When true , downloads the full catalog, Operator package, or Operator channel. Boolean. The default value is false . mirror.operators.packages The Operator packages configuration. Array of objects. For example: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '5.2.3-31' mirror.operators.packages.name The Operator package name to include in the image set String. For example: elasticsearch-operator . mirror.operators.packages.channels The Operator package channel configuration. Object mirror.operators.packages.channels.name The Operator channel name, unique within a package, to include in the image set. String. For example: fast or stable-v4.16 . mirror.operators.packages.channels.maxVersion The highest version of the Operator mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 mirror.operators.packages.channels.minBundle The name of the minimum bundle to include, plus all bundles in the update graph to the channel head. Set this field only if the named bundle has no semantic version metadata. String. For example: bundleName mirror.operators.packages.channels.minVersion The lowest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 mirror.operators.packages.maxVersion The highest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 . mirror.operators.packages.minVersion The lowest version of the Operator to mirror across all channels in which it exists. See the following note for further information. String. For example: 5.2.3-31 . mirror.operators.skipDependencies If true , dependencies of bundles are not included. Boolean. The default value is false . mirror.operators.targetCatalog An alternative name and optional namespace hierarchy to mirror the referenced catalog as. String. For example: my-namespace/my-operator-catalog mirror.operators.targetName An alternative name to mirror the referenced catalog as. The targetName parameter is deprecated. Use the targetCatalog parameter instead. String. For example: my-operator-catalog mirror.operators.targetTag An alternative tag to append to the targetName or targetCatalog . String. For example: v1 mirror.platform The platform configuration of the image set. Object mirror.platform.architectures The architecture of the platform release payload to mirror. Array of strings. For example: architectures: - amd64 - arm64 - multi - ppc64le - s390x The default value is amd64 . The value multi ensures that the mirroring is supported for all available architectures, eliminating the need to specify individual architectures. mirror.platform.channels The platform channel configuration of the image set. Array of objects. For example: channels: - name: stable-4.10 - name: stable-4.16 mirror.platform.channels.full When true , sets the minVersion to the first release in the channel and the maxVersion to the last release in the channel. Boolean. The default value is false . 
mirror.platform.channels.name The name of the release channel. String. For example: stable-4.16 mirror.platform.channels.minVersion The minimum version of the referenced platform to be mirrored. String. For example: 4.12.6 mirror.platform.channels.maxVersion The highest version of the referenced platform to be mirrored. String. For example: 4.16.1 mirror.platform.channels.shortestPath Toggles shortest path mirroring or full range mirroring. Boolean. The default value is false . mirror.platform.channels.type The type of the platform to be mirrored. String. For example: ocp or okd . The default is ocp . mirror.platform.graph Indicates whether the OSUS graph is added to the image set and subsequently published to the mirror. Boolean. The default value is false . storageConfig The back-end configuration of the image set. Object storageConfig.local The local back-end configuration of the image set. Object storageConfig.local.path The path of the directory to contain the image set metadata. String. For example: ./path/to/dir/ . storageConfig.registry The registry back-end configuration of the image set. Object storageConfig.registry.imageURL The back-end registry URI. Can optionally include a namespace reference in the URI. String. For example: quay.io/myuser/imageset:metadata . storageConfig.registry.skipTLS Optionally skip TLS verification of the referenced back-end registry. Boolean. The default value is false . Note Using the minVersion and maxVersion properties to filter for a specific Operator version range can result in a multiple channel heads error. The error message states that there are multiple channel heads . This is because when the filter is applied, the update graph of the Operator is truncated. Operator Lifecycle Manager requires that every Operator channel contains versions that form an update graph with exactly one end point, that is, the latest version of the Operator. When the filter range is applied, that graph can turn into two or more separate graphs or a graph that has more than one end point. To avoid this error, do not filter out the latest version of an Operator. If you still run into the error, depending on the Operator, either the maxVersion property must be increased or the minVersion property must be decreased. Because every Operator graph can be different, you might need to adjust these values until the error resolves. 4.13. Image set configuration examples The following ImageSetConfiguration file examples show the configuration for various mirroring use cases. Use case: Including the shortest OpenShift Container Platform update path The following ImageSetConfiguration file uses a local storage backend and includes all OpenShift Container Platform versions along the shortest update path from the minimum version of 4.11.37 to the maximum version of 4.12.15 . Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true Use case: Including all versions of OpenShift Container Platform from a minimum to the latest version for multi-architecture releases The following ImageSetConfiguration file uses a registry storage backend and includes all OpenShift Container Platform versions starting at a minimum version of 4.13.4 to the latest version in the channel. 
On every invocation of oc-mirror with this image set configuration, the latest release of the stable-4.13 channel is evaluated, so running oc-mirror at regular intervals ensures that you automatically receive the latest releases of OpenShift Container Platform images. By setting the value of platform.architectures to multi , you can ensure that the mirroring is supported for multi-architecture releases. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - "multi" channels: - name: stable-4.13 minVersion: 4.13.4 maxVersion: 4.13.6 Use case: Including Operator versions from a minimum to the latest The following ImageSetConfiguration file uses a local storage backend and includes only the Red Hat Advanced Cluster Security for Kubernetes Operator, versions starting at 4.0.1 and later in the stable channel. Note When you specify a minimum or maximum version range, you might not receive all Operator versions in that range. By default, oc-mirror excludes any versions that are skipped or replaced by a newer version in the Operator Lifecycle Manager (OLM) specification. Operator versions that are skipped might be affected by a CVE or contain bugs. Use a newer version instead. For more information on skipped and replaced versions, see Creating an update graph with OLM . To receive all Operator versions in a specified range, you can set the mirror.operators.full field to true . Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1 Note To specify a maximum version instead of the latest, set the mirror.operators.packages.channels.maxVersion field. Use case: Including the Nutanix CSI Operator The following ImageSetConfiguration file uses a local storage backend and includes the Nutanix CSI Operator, the OpenShift Update Service (OSUS) graph image, and an additional Red Hat Universal Base Image (UBI). Example ImageSetConfiguration file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.11 type: ocp graph: true operators: - catalog: registry.redhat.io/redhat/certified-operator-index:v4.16 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest Use case: Including the default Operator channel The following ImageSetConfiguration file includes the stable-5.7 and stable channels for the OpenShift Elasticsearch Operator. Even if only the packages from the stable-5.7 channel are needed, the stable channel must also be included in the ImageSetConfiguration file, because it is the default channel for the Operator. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. Tip You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 
Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable Use case: Including an entire catalog (all versions) The following ImageSetConfiguration file sets the mirror.operators.full field to true to include all versions for an entire Operator catalog. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 full: true Use case: Including an entire catalog (channel heads only) The following ImageSetConfiguration file includes the channel heads for an entire Operator catalog. By default, for each Operator in the catalog, oc-mirror includes the latest Operator version (channel head) from the default channel. If you want to mirror all Operator versions, and not just the channel heads, you must set the mirror.operators.full field to true . This example also uses the targetCatalog field to specify an alternative namespace and name to mirror the catalog as. Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 targetCatalog: my-namespace/my-operator-catalog Use case: Including arbitrary images and helm charts The following ImageSetConfiguration file uses a registry storage backend and includes helm charts and an additional Red Hat Universal Base Image (UBI). Example ImageSetConfiguration file apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - "s390x" channels: - name: stable-4.16 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest Use case: Including the upgrade path for EUS releases The following ImageSetConfiguration file includes the eus-<version> channel, where the maxVersion value is at least two minor versions higher than the minVersion value. For example, in this ImageSetConfiguration file, the minVersion is set to 4.12.28 , while the maxVersion for the eus-4.14 channel is 4.14.16 . Example ImageSetConfiguration file kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: graph: true # Required for the OSUS Operator architectures: - amd64 channels: - name: stable-4.12 minVersion: '4.12.28' maxVersion: '4.12.28' shortestPath: true type: ocp - name: eus-4.14 minVersion: '4.12.28' maxVersion: '4.14.16' shortestPath: true type: ocp 4.14. Command reference for oc-mirror The following tables describe the oc mirror subcommands and flags: Table 4.2. oc mirror subcommands Subcommand Description completion Generate the autocompletion script for the specified shell. describe Output the contents of an image set. 
help Show help about any subcommand. init Output an initial image set configuration template. list List available platform and Operator content and their version. version Output the oc-mirror version. Table 4.3. oc mirror flags Flag Description -c , --config <string> Specify the path to an image set configuration file. --continue-on-error If any non image-pull related error occurs, continue and attempt to mirror as much as possible. --dest-skip-tls Disable TLS validation for the target registry. --dest-use-http Use plain HTTP for the target registry. --dry-run Print actions without mirroring images. Generates mapping.txt and pruning-plan.json files. --from <string> Specify the path to an image set archive that was generated by an execution of oc-mirror to load into a target registry. -h , --help Show the help. --ignore-history Ignore past mirrors when downloading images and packing layers. Disables incremental mirroring and might download more data. --manifests-only Generate manifests for ImageContentSourcePolicy objects to configure a cluster to use the mirror registry, but do not actually mirror any images. To use this flag, you must pass in an image set archive with the --from flag. --max-nested-paths <int> Specify the maximum number of nested paths for destination registries that limit nested paths. The default is 0 . --max-per-registry <int> Specify the number of concurrent requests allowed per registry. The default is 6 . --oci-insecure-signature-policy Do not push signatures when mirroring local OCI catalogs (with --include-local-oci-catalogs ). --oci-registries-config Provide a registries configuration file to specify an alternative registry location to copy from when mirroring local OCI catalogs (with --include-local-oci-catalogs ). --skip-cleanup Skip removal of artifact directories. --skip-image-pin Do not replace image tags with digest pins in Operator catalogs. --skip-metadata-check Skip metadata when publishing an image set. This is only recommended when the image set was created with --ignore-history . --skip-missing If an image is not found, skip it instead of reporting an error and aborting execution. Does not apply to custom images explicitly specified in the image set configuration. --skip-pruning Disable automatic pruning of images from the target mirror registry. --skip-verification Skip digest verification. --source-skip-tls Disable TLS validation for the source registry. --source-use-http Use plain HTTP for the source registry. -v , --verbose <int> Specify the number for the log level verbosity. Valid values are 0 - 9 . The default is 0 . 4.15. Additional resources About cluster updates in a disconnected environment
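As an illustration of how these flags combine, the following invocation performs a dry run of a mirror operation while limiting per-registry concurrency, so that you can review the generated mapping.txt file before mirroring for real. This is a sketch only; the configuration file name and registry host are placeholder values:
oc mirror --config=./imageset-config.yaml docker://registry.example:5000 --dry-run --max-per-registry 4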
[ "tar xvzf oc-mirror.tar.gz", "chmod +x oc-mirror", "sudo mv oc-mirror /usr/local/bin/.", "oc mirror help", "cat ./pull-secret | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "mkdir -p <directory_name>", "cp <path>/<pull_secret_file_in_json> <directory_name>/<auth_file>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "oc mirror init --registry <storage_backend> > imageset-config.yaml 1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.16 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 6 packages: - name: serverless-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}", "oc mirror --config=./<imageset-config.yaml> \\ 1 docker://registry.example:5000 2", "oc mirror --config=./imageset-config.yaml \\ 1 file://<path_to_output_directory> 2", "cd <path_to_output_directory>", "ls", "mirror_seq1_000000.tar", "oc mirror --from=./mirror_seq1_000000.tar \\ 1 docker://registry.example:5000 2", "oc apply -f ./oc-mirror-workspace/results-1639608409/", "oc apply -f ./oc-mirror-workspace/results-1639608409/release-signatures/", "oc get imagecontentsourcepolicy", "oc get catalogsource -n openshift-marketplace", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.13 1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: rhacs-operator channels: - name: stable 1", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 
4.12.1 maxVersion: 4.12.1 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages: - name: <new_operator_name> 1 channels: - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 packages:", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 \\ 2 --dry-run 3", "Checking push permissions for registry.example:5000 Creating directory: oc-mirror-workspace/src/publish Creating directory: oc-mirror-workspace/src/v2 Creating directory: oc-mirror-workspace/src/charts Creating directory: oc-mirror-workspace/src/release-signatures No metadata detected, creating new workspace wrote mirroring manifests to oc-mirror-workspace/operators.1658342351/manifests-redhat-operator-index info: Planning completed in 31.48s info: Dry run complete Writing image mapping to oc-mirror-workspace/mapping.txt", "cd oc-mirror-workspace/", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: local: path: /home/user/metadata 1 mirror: platform: channels: - name: stable-4.16 2 type: ocp graph: false operators: - catalog: oci:///home/user/oc-mirror/my-oci-catalog 3 targetCatalog: my-namespace/redhat-operator-index 4 packages: - name: aws-load-balancer-operator - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 5 packages: - name: rhacs-operator additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 6", "oc mirror --config=./imageset-config.yaml \\ 1 docker://registry.example:5000 2", "[[registry]] location = \"registry.redhat.io:5000\" insecure = false blocked = false mirror-by-digest-only = true prefix = \"\" [[registry.mirror]] location = \"preprod-registry.example.com\" insecure = false", "additionalImages: - name: registry.redhat.io/ubi8/ubi:latest", "local: - name: podinfo path: /test/podinfo-5.0.0.tar.gz", "repositories: - name: podinfo url: https://example.github.io/podinfo charts: - name: podinfo version: 5.0.0", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '2.4.0'", "operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator minVersion: '5.2.3-31'", "architectures: - amd64 - arm64 - multi - ppc64le - s390x", "channels: - name: stable-4.10 - name: stable-4.16", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: platform: channels: - name: stable-4.12 minVersion: 4.11.37 maxVersion: 4.12.15 shortestPath: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"multi\" channels: - name: stable-4.13 minVersion: 4.13.4 maxVersion: 4.13.6", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: local: path: /home/user/metadata mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: rhacs-operator channels: - name: stable minVersion: 4.0.1", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 storageConfig: registry: imageURL: mylocalregistry/ocp-mirror/openshift4 skipTLS: false mirror: platform: channels: - name: stable-4.11 type: ocp graph: true operators: - catalog: 
registry.redhat.io/redhat/certified-operator-index:v4.16 packages: - name: nutanixcsioperator channels: - name: stable additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 packages: - name: elasticsearch-operator channels: - name: stable-5.7 - name: stable", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 full: true", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 targetCatalog: my-namespace/my-operator-catalog", "apiVersion: mirror.openshift.io/v1alpha2 kind: ImageSetConfiguration archiveSize: 4 storageConfig: registry: imageURL: example.com/mirror/oc-mirror-metadata skipTLS: false mirror: platform: architectures: - \"s390x\" channels: - name: stable-4.16 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.16 helm: repositories: - name: redhat-helm-charts url: https://raw.githubusercontent.com/redhat-developer/redhat-helm-charts/master charts: - name: ibm-mongodb-enterprise-helm version: 0.2.0 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v2alpha1 mirror: platform: graph: true # Required for the OSUS Operator architectures: - amd64 channels: - name: stable-4.12 minVersion: '4.12.28' maxVersion: '4.12.28' shortestPath: true type: ocp - name: eus-4.14 minVersion: '4.12.28' maxVersion: '4.14.16' shortestPath: true type: ocp" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/disconnected_installation_mirroring/installing-mirroring-disconnected
Chapter 1. Security APIs
Chapter 1. Security APIs 1.1. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object 1.2. CredentialsRequest [cloudcredential.openshift.io/v1] Description CredentialsRequest is the Schema for the credentialsrequests API Type object 1.3. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.4. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.5. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.6. RangeAllocation [security.openshift.io/v1] Description RangeAllocation is used so we can easily expose a RangeAllocation typed for security group Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.7. Secret [v1] Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object 1.8. SecurityContextConstraints [security.openshift.io/v1] Description SecurityContextConstraints governs the ability to make requests that affect the SecurityContext that will be applied to a container. For historical reasons SCC was exposed under the core Kubernetes API group. That exposure is deprecated and will be removed in a future release - users should instead use the security.openshift.io group to manage SecurityContextConstraints. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. ServiceAccount [v1] Description ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets Type object
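To make the CertificateSigningRequest description more concrete, the following is a minimal sketch of a CSR object that asks the built-in kubernetes.io/kube-apiserver-client signer for a client certificate. The object name and the base64-encoded request are placeholders, not values taken from this document:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-client-csr
spec:
  request: <base64-encoded PKCS#10 certificate signing request>
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
After the request is approved and issued, the signed certificate is returned in the object's status.certificate field.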
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_apis/security-apis
20.21. Example Domain XML Configuration
20.21. Example Domain XML Configuration QEMU emulated guest virtual machine on AMD64 and Intel <domain type='qemu'> <name>QEmu-fedora-i686</name> <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid> <memory>219200</memory> <currentMemory>219200</currentMemory> <vcpu>2</vcpu> <os> <type arch='i686' machine='pc'>hvm</type> <boot dev='cdrom'/> </os> <devices> <emulator>/usr/bin/qemu-system-x86_64</emulator> <disk type='file' device='cdrom'> <source file='/home/user/boot.iso'/> <target dev='hdc'/> <readonly/> </disk> <disk type='file' device='disk'> <source file='/home/user/fedora.img'/> <target dev='hda'/> </disk> <interface type='network'> <source network='default'/> </interface> <graphics type='vnc' port='-1'/> </devices> </domain> Figure 20.70. Example domain XML config KVM hardware accelerated guest virtual machine on i686 <domain type='kvm'> <name>demo2</name> <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid> <memory>131072</memory> <vcpu>1</vcpu> <os> <type arch="i686">hvm</type> </os> <clock sync="localtime"/> <devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <source file='/var/lib/libvirt/images/demo2.img'/> <target dev='hda'/> </disk> <interface type='network'> <source network='default'/> <mac address='24:42:53:21:52:45'/> </interface> <graphics type='vnc' port='-1' keymap='de'/> </devices> </domain> Figure 20.71. Example domain XML config
[ "<domain type='qemu'> <name>QEmu-fedora-i686</name> <uuid>c7a5fdbd-cdaf-9455-926a-d65c16db1809</uuid> <memory>219200</memory> <currentMemory>219200</currentMemory> <vcpu>2</vcpu> <os> <type arch='i686' machine='pc'>hvm</type> <boot dev='cdrom'/> </os> <devices> <emulator>/usr/bin/qemu-system-x86_64</emulator> <disk type='file' device='cdrom'> <source file='/home/user/boot.iso'/> <target dev='hdc'/> <readonly/> </disk> <disk type='file' device='disk'> <source file='/home/user/fedora.img'/> <target dev='hda'/> </disk> <interface type='network'> <source network='default'/> </interface> <graphics type='vnc' port='-1'/> </devices> </domain>", "<domain type='kvm'> <name>demo2</name> <uuid>4dea24b3-1d52-d8f3-2516-782e98a23fa0</uuid> <memory>131072</memory> <vcpu>1</vcpu> <os> <type arch=\"i686\">hvm</type> </os> <clock sync=\"localtime\"/> <devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <source file='/var/lib/libvirt/images/demo2.img'/> <target dev='hda'/> </disk> <interface type='network'> <source network='default'/> <mac address='24:42:53:21:52:45'/> </interface> <graphics type='vnc' port='-1' keymap='de'/> </devices> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/section-libvirt-dom-xml-example
1.3. GFS Software Subsystems
1.3. GFS Software Subsystems Table 1.1, "GFS Software Subsystem Components" summarizes the GFS Software subsystems and their components. Table 1.1. GFS Software Subsystem Components Software Subsystem Components Description GFS gfs.ko Kernel module that implements the GFS file system and is loaded on GFS cluster nodes. gfs_fsck Command that repairs an unmounted GFS file system. gfs_grow Command that grows a mounted GFS file system. gfs_jadd Command that adds journals to a mounted GFS file system. gfs_mkfs Command that creates a GFS file system on a storage device. gfs_quota Command that manages quotas on a mounted GFS file system. gfs_tool Command that configures or tunes a GFS file system. This command can also gather a variety of information about the file system. lock_harness.ko Implements a pluggable lock module interface for GFS that allows for a variety of locking mechanisms to be used (for example, the DLM lock module, lock_dlm.ko ). lock_dlm.ko A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko and communicates with the DLM lock manager in Red Hat Cluster Suite. lock_gulm.ko A lock module that implements GULM locking for GFS. It plugs into the lock harness, lock_harness.ko and communicates with the GULM lock manager in Red Hat Cluster Suite. lock_nolock.ko A lock module for use when GFS is used as a local file system only. It plugs into the lock harness, lock_harness.ko and provides local locking.
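As a sketch of how these components are used together, the following command creates a GFS file system that uses the DLM lock module ( lock_dlm.ko ). The cluster name alpha , file system name mydata , journal count, and block device are placeholder values, not values taken from this table:
gfs_mkfs -p lock_dlm -t alpha:mydata -j 3 /dev/vg01/lvol0
Here, -p selects the locking protocol, -t sets the lock table name in the form ClusterName:FSName , and -j sets the number of journals, one for each node that will mount the file system.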
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-ov-subsystems
Chapter 4. OpenShift Service Mesh and cert-manager
Chapter 4. OpenShift Service Mesh and cert-manager The cert-manager tool is a solution for X.509 certificate management on Kubernetes. It delivers a unified API to integrate applications with private or public key infrastructure (PKI), such as Vault, Google Cloud Certificate Authority Service, Let's Encrypt, and other providers. Important The cert-manager tool must be installed before you create and install your Istio resource. The cert-manager tool ensures the certificates are valid and up-to-date by attempting to renew certificates at a configured time before they expire. 4.1. About integrating Service Mesh with cert-manager and istio-csr The cert-manager tool provides integration with Istio through an external agent called istio-csr . The istio-csr agent handles certificate signing requests (CSR) from Istio proxies and the controlplane in the following ways: Verifying the identity of the workload. Creating a CSR through cert-manager for the workload. The cert-manager tool then creates a CSR to the configured CA Issuer, which signs the certificate. Note Red Hat provides support for integrating with istio-csr and cert-manager. Red Hat does not provide direct support for the istio-csr or the community cert-manager components. The use of community cert-manager shown here is for demonstration purposes only. Prerequisites One of these versions of cert-manager: Red Hat cert-manager Operator 1.10 or later community cert-manager Operator 1.11 or later cert-manager 1.11 or later Red Hat OpenShift Service Mesh 3.0 or later An IstioCNI instance is running in the cluster Istio CLI ( istioctl ) tool is installed jq is installed Helm is installed 4.2. Installing cert-manager You can integrate cert-manager with OpenShift Service Mesh by deploying istio-csr and then creating an Istio resource that uses the istio-csr agent to process workload and control plane certificate signing requests. This example creates a self-signed Issuer , but any other Issuer can be used instead. Important You must install cert-manager before installing your Istio resource. Procedure Create the istio-system namespace by running the following command: USD oc create namespace istio-system Create the root issuer by creating an Issuer object in a YAML file. 
Create an Issuer object similar to the following example: Example issuer.yaml file apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned namespace: istio-system spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca --- Create the objects by running the following command: USD oc apply -f issuer.yaml Wait for the istio-ca certificate to contain the "Ready" status condition by running the following command: USD oc wait --for=condition=Ready certificates/istio-ca -n istio-system Copy the istio-ca certificate to the cert-manager namespace so it can be used by istio-csr: Copy the secret to a local file by running the following command: USD oc get -n istio-system secret istio-ca -o jsonpath='{.data.tls\.crt}' | base64 -d > ca.pem Create a secret from the local certificate file in the cert-manager namespace by running the following command: USD oc create secret generic -n cert-manager istio-root-ca --from-file=ca.pem=ca.pem steps To install istio-csr , you must follow the istio-csr installation instructions for the type of update strategy you want. By default, spec.updateStrategy is set to InPlace when you create and install your Istio resource. You create and install your Istio resource after you install istio-csr . Installing the istio-csr agent by using the in place update strategy Installing the istio-csr agent by using the revision based update strategy 4.2.1. Installing the istio-csr agent by using the in place update strategy Istio resources use the in place update strategy by default. Follow this procedure if you plan to leave spec.updateStrategy as InPlace when you create and install your Istio resource. Procedure Add the Jetstack charts repository to your local Helm repository by running the following command: USD helm repo add jetstack https://charts.jetstack.io --force-update Install the istio-csr chart by running the following command: USD helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \ --install \ --namespace cert-manager \ --wait \ --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \ --set "volumeMounts[0].name=root-ca" \ --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \ --set "volumes[0].name=root-ca" \ --set "volumes[0].secret.secretName=istio-root-ca" \ --set "app.istio.namespace=istio-system" steps Installing your Istio resource 4.2.2. Installing the istio-csr agent by using the revision based update strategy Istio resources use the in place update strategy by default. Follow this procedure if you plan to change spec.updateStrategy to RevisionBased when you create and install your Istio resource. Procedure Specify all the Istio revisions to your istio-csr deployment. See "istio-csr deployment". 
Add the Jetstack charts to your local Helm repository by running the following command: USD helm repo add jetstack https://charts.jetstack.io --force-update Install the istio-csr chart with your revision name by running the following command: USD helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \ --install \ --namespace cert-manager \ --wait \ --set "app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem" \ --set "volumeMounts[0].name=root-ca" \ --set "volumeMounts[0].mountPath=/var/run/secrets/istio-csr" \ --set "volumes[0].name=root-ca" \ --set "volumes[0].secret.secretName=istio-root-ca" \ --set "app.istio.namespace=istio-system" \ --set "app.istio.revisions={default-v1-23-0}" Note Revision names use the following format, <istio-name>-v<major_version>-<minor_version>-<patch_version> . For example: default-v1-23-0 . Additional resources istio-csr deployment steps Installing your Istio resource 4.2.3. Installing your Istio resource After you have installed istio-csr by following the procedure for either an in place or revision based update strategy, you can install the Istio resource. You need to disable Istio's built in CA server and tell istiod to use the istio-csr CA server. The istio-csr CA server issues certificates for both istiod and user workloads. Procedure Create the Istio object as shown in the following example: Example istio.yaml object apiVersion: sailoperator.io/v1alpha1 kind: Istio metadata: name: default spec: version: v1.23.0 namespace: istio-system values: global: caAddress: cert-manager-istio-csr.cert-manager.svc:443 pilot: env: ENABLE_CA_SERVER: "false" volumeMounts: - mountPath: /tmp/var/run/secrets/istiod/tls name: istio-csr-dns-cert readOnly: true Note If you installed your CSR agent with a revision based update strategy, then you need to add the following to your Istio object YAML: kind: Istio metadata: name: default spec: updateStrategy: type: RevisionBased Create the Istio resource by running the following command: USD oc apply -f istio.yaml Wait for the Istio object to become ready by running the following command: USD oc wait --for=condition=Ready istios/default -n istio-system 4.2.4. Verifying cert-manager installation You can use the sample httpbin service and sleep application to check communication between the workloads. You can also check the workload certificate of the proxy to verify that the cert-manager tool is installed correctly. 
Procedure Create the sample namespace by running the following command: USD oc new-project sample Find your active Istio revision by running the following command: USD oc get istiorevisions Add the injection label for your active revision to the sample namespace by running the following command: USD oc label namespace sample istio.io/rev=<your-active-revision-name> --overwrite=true Deploy the sample httpbin service by running the following command: USD oc apply -n sample -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/httpbin/httpbin.yaml Deploy the sample sleep application by running the following command: USD oc apply -n sample -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml Wait for both applications to become ready by running the following command: USD oc rollout status -n sample deployment httpbin sleep Verify that the sleep application can access the httpbin service by running the following command: USD oc exec "USD(oc get pod -l app=sleep -n sample \ -o jsonpath={.items..metadata.name})" -c sleep -n sample -- \ curl http://httpbin.sample:8000/ip -s -o /dev/null \ -w "%{http_code}\n" Example of a successful output 200 Run the following command to print the workload certificate for the httpbin service and verify the output: USD istioctl proxy-config secret -n sample USD(oc get pods -n sample -o jsonpath='{.items..metadata.name}' --selector app=httpbin) -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode | openssl x509 -text -noout Example output ... Issuer: O = cert-manager + O = cluster.local, CN = istio-ca ... X509v3 Subject Alternative Name: URI:spiffe://cluster.local/ns/sample/sa/httpbin 4.3. Updating istio-csr agents with revision-based update strategies If you deployed your Istio resource using the revision based update strategy, you must pass all revisions each time you update your control plane. You must perform the update in the following order: Update the istio-csr deployment with the new revision. Update the value of the Istio.spec.version parameter/field. Example update for RevisionBased control plane In this example, the control plane is being updated from v1.23.0 to v1.23.1. Update the istio-csr deployment with the new revision by running the following command: USD helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \ --wait \ --reuse-values \ --set "app.istio.revisions={<old_revision>,<new_revision>}" where: old_revision Specifies the old revision in the <istio-name>-v<major_version>-<minor_version>-<patch_version> format. For example: default-v1-23-0 . new_revision Specifies the new revision in the <istio-name>-v<major_version>-<minor_version>-<patch_version> format. For example: default-v1-23-1 . Update the istio.spec.version in the Istio object similar to the following example: Example istio.yaml file apiVersion: sailoperator.io/v1alpha1 kind: Istio metadata: name: default spec: version: <new_revision> 1 1 Update to the new revision prefixed with the letter v , such as v1.23.1 Remove the old revision from your istio-csr deployment by running the following command: helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr \ --install \ --namespace cert-manager \ --wait \ --reuse-values \ --set "app.istio.revisions={default-v1-23-1}"
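For example, during the first step of this update, the istio-csr deployment would be configured with both the old and new example revisions shown above: helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr --wait --reuse-values --set "app.istio.revisions={default-v1-23-0,default-v1-23-1}" Only after the Istio object is updated and the old revision is no longer in use is it removed from the list, as shown in the final command.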
[ "oc create namespace istio-system", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned namespace: istio-system spec: selfSigned: {} --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: istio-ca namespace: istio-system spec: isCA: true duration: 87600h # 10 years secretName: istio-ca commonName: istio-ca privateKey: algorithm: ECDSA size: 256 subject: organizations: - cluster.local - cert-manager issuerRef: name: selfsigned kind: Issuer group: cert-manager.io --- apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: istio-ca namespace: istio-system spec: ca: secretName: istio-ca ---", "oc apply -f issuer.yaml", "oc wait --for=condition=Ready certificates/istio-ca -n istio-system", "oc get -n istio-system secret istio-ca -o jsonpath='{.data.tls\\.crt}' | base64 -d > ca.pem", "oc create secret generic -n cert-manager istio-root-ca --from-file=ca.pem=ca.pem", "helm repo add jetstack https://charts.jetstack.io --force-update", "helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr --install --namespace cert-manager --wait --set \"app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem\" --set \"volumeMounts[0].name=root-ca\" --set \"volumeMounts[0].mountPath=/var/run/secrets/istio-csr\" --set \"volumes[0].name=root-ca\" --set \"volumes[0].secret.secretName=istio-root-ca\" --set \"app.istio.namespace=istio-system\"", "helm repo add jetstack https://charts.jetstack.io --force-update", "helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr --install --namespace cert-manager --wait --set \"app.tls.rootCAFile=/var/run/secrets/istio-csr/ca.pem\" --set \"volumeMounts[0].name=root-ca\" --set \"volumeMounts[0].mountPath=/var/run/secrets/istio-csr\" --set \"volumes[0].name=root-ca\" --set \"volumes[0].secret.secretName=istio-root-ca\" --set \"app.istio.namespace=istio-system\" --set \"app.istio.revisions={default-v1-23-0}\"", "apiVersion: sailoperator.io/v1alpha1 kind: Istio metadata: name: default spec: version: v1.23.0 namespace: istio-system values: global: caAddress: cert-manager-istio-csr.cert-manager.svc:443 pilot: env: ENABLE_CA_SERVER: \"false\" volumeMounts: - mountPath: /tmp/var/run/secrets/istiod/tls name: istio-csr-dns-cert readOnly: true", "kind: Istio metadata: name: default spec: updateStrategy: type: RevisionBased", "oc apply -f istio.yaml", "oc wait --for=condition=Ready istios/default -n istio-system", "oc new-project sample", "oc get istiorevisions", "oc label namespace sample istio.io/rev=<your-active-revision-name> --overwrite=true", "oc apply -n sample -f https://raw.githubusercontent.com/openshift-service-mesh/istio/refs/heads/master/samples/httpbin/httpbin.yaml", "oc apply -n sample -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/sleep/sleep.yaml", "oc rollout status -n sample deployment httpbin sleep", "oc exec \"USD(oc get pod -l app=sleep -n sample -o jsonpath={.items..metadata.name})\" -c sleep -n sample -- curl http://httpbin.sample:8000/ip -s -o /dev/null -w \"%{http_code}\\n\"", "200", "istioctl proxy-config secret -n sample USD(oc get pods -n sample -o jsonpath='{.items..metadata.name}' --selector app=httpbin) -o json | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode | openssl x509 -text -noout", "Issuer: O = cert-manager + O = cluster.local, CN = istio-ca X509v3 Subject Alternative Name: URI:spiffe://cluster.local/ns/sample/sa/httpbin", "helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr --wait 
--reuse-values --set \"app.istio.revisions={<old_revision>,<new_revision>}\"", "apiVersion: sailoperator.io/v1alpha1 kind: Istio metadata: name: default spec: version: <new_revision> 1", "helm upgrade cert-manager-istio-csr jetstack/cert-manager-istio-csr --install --namespace cert-manager --wait --reuse-values --set \"app.istio.revisions={default-v1-23-1}\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/installing/ossm-cert-manager-assembly
24.4.2. Using the blkid Command
24.4.2. Using the blkid Command The blkid command allows you to display information about available block devices. To do so, type the following at a shell prompt as root : blkid For each listed block device, the blkid command displays available attributes such as its universally unique identifier ( UUID ), file system type ( TYPE ), or volume label ( LABEL ). For example: By default, the blkid command lists all available block devices. To display information about a particular device only, specify the device name on the command line: blkid device_name For instance, to display information about /dev/vda1 , type: You can also use the above command with the -p and -o udev command-line options to obtain more detailed information. Note that root privileges are required to run this command: blkid -po udev device_name For example: For a complete list of available command-line options, see the blkid (8) manual page.
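If you only need a single attribute rather than the full listing, you can restrict the output. For example, the following command prints only the UUID of the /dev/vda1 device used in the examples above; the -s option selects the tag to display and -o value prints the bare value without the NAME= prefix:
blkid -s UUID -o value /dev/vda1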
[ "~]# blkid /dev/vda1: UUID=\"7fa9c421-0054-4555-b0ca-b470a97a3d84\" TYPE=\"ext4\" /dev/vda2: UUID=\"7IvYzk-TnnK-oPjf-ipdD-cofz-DXaJ-gPdgBW\" TYPE=\"LVM2_member\" /dev/mapper/vg_kvm-lv_root: UUID=\"a07b967c-71a0-4925-ab02-aebcad2ae824\" TYPE=\"ext4\" /dev/mapper/vg_kvm-lv_swap: UUID=\"d7ef54ca-9c41-4de4-ac1b-4193b0c1ddb6\" TYPE=\"swap\"", "~]# blkid /dev/vda1 /dev/vda1: UUID=\"7fa9c421-0054-4555-b0ca-b470a97a3d84\" TYPE=\"ext4\"", "~]# blkid -po udev /dev/vda1 ID_FS_UUID=7fa9c421-0054-4555-b0ca-b470a97a3d84 ID_FS_UUID_ENC=7fa9c421-0054-4555-b0ca-b470a97a3d84 ID_FS_VERSION=1.0 ID_FS_TYPE=ext4 ID_FS_USAGE=filesystem" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-sysinfo-filesystems-blkid
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/release_notes/making-open-source-more-inclusive
Chapter 2. Resources for troubleshooting automation controller
Chapter 2. Resources for troubleshooting automation controller For information about troubleshooting automation controller, see Troubleshooting automation controller in Configuring automation execution . For information about troubleshooting the performance of automation controller, see Performance troubleshooting for automation controller in Configuring automation execution .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/troubleshooting_ansible_automation_platform/troubleshoot-controller
Part X. Migration
Part X. Migration This part provides recommended practices for migrating deployments from other solutions to Identity Management .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.migrating
Chapter 6. Downloading files from your bucket
Chapter 6. Downloading files from your bucket To download a file from your bucket to your workbench, use the download_file() method. Prerequisites You have cloned the odh-doc-examples repository to your workbench. You have opened the s3client_examples.ipynb file in your workbench. You have installed Boto3 and configured an S3 client. Procedure In the notebook, locate the following instructions to download files from a bucket: Modify the code sample: Replace <bucket_name> with the name of the bucket that the file is located in... Replace <object_name> with the name of the file that you want to download. Replace <file_name> with the name and path that you want the file to be downloaded to, as shown in the example. Run the code cell. Verification The file that you downloaded appears in the path that you specified on your workbench.
[ "#Download file from bucket #Replace the following values with your own: #<bucket_name>: The name of the bucket. #<object_name>: The name of the file to download. Must include full path to the file on the bucket. #<file_name>: The name of the file when downloaded. s3_client.download_file('<bucket_name>','<object_name>','<file_name>')", "s3_client.download_file('aqs086-image-registry', 'series35-image36-086.csv', '\\tmp\\series35-image36-086.csv_old')" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_in_an_s3-compatible_object_store/downloading-files-from-available-amazon-s3-buckets-using-notebook-cells_s3
Chapter 18. The camel-jbossdatagrid-fuse Quickstart
Chapter 18. The camel-jbossdatagrid-fuse Quickstart This quickstart shows how to use the component described in Section 5.1, "The camel-jbossdatagrid Component" on JBoss Fuse to interact with JBoss Data Grid. This quickstart deploys two bundles, local_cache_producer and local_cache_consumer , on Fuse, one on each of the containers child1 and child2 respectively. Below is a description of each of the bundles: local_cache_producer : Scans a folder (/tmp/incoming) for incoming CSV files of the format "id, firstName, lastName, age". If a file is dropped with entries in the given format, each entry is read and transformed into a Person POJO and stored in the data grid. local_cache_consumer : Lets you query for a POJO using a RESTful interface and receive a JSON representation of the Person POJO stored in the data grid for the given key. The bundles reside in two different containers; the consumer is able to extract what the producer has put in because the same configuration is used in the infinispan.xml and jgroups.xml files. The infinispan.xml file defines a REPL (replicated) cache named camel-cache , and both the consumer and producer interact with this cache. 18.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 7.0 (Java SDK 1.7) or better Maven 3.0 or better JBoss Fuse 6.2.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories.
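As a concrete illustration of the producer's input, a CSV file dropped into /tmp/incoming would contain lines such as the following; the values are sample data made up to match the "id, firstName, lastName, age" layout described above:
1,John,Doe,35
2,Jane,Smith,28
Each line is read, transformed into a Person POJO, and stored in the replicated camel-cache cache, where the consumer bundle can later retrieve it by its key through the RESTful interface.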
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/camel-jboss_data_grid_quickstart
Chapter 3. Post-deployment IPv6 operations
Chapter 3. Post-deployment IPv6 operations After you deploy the overcloud with IPv6 networking, you must perform some additional configuration. Prerequisites A successful undercloud installation. For more information, see Installing director . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . 3.1. Creating an IPv6 project network on the overcloud The overcloud requires an IPv6-based Project network for instances. Source the overcloudrc file and create an initial Project network in neutron . Prerequisites A successful undercloud installation. For more information, see Installing director . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Source the overcloud credentials file: Create a network and subnet: This creates a basic neutron network called default . Verification steps Verify that the network was created successfully: 3.2. Creating an IPv6 public network on the overcloud After you configure the node interfaces to use the External network, you must create this network on the overcloud to enable network access. Prerequisites A successful undercloud installation. For more information, see Installing director . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Create an external network and subnet: This command creates a network called public that provides an allocation pool of over 65000 IPv6 addresses for our instances. Create a router to route instance traffic to the External network.
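Depending on your topology, instance traffic from the Project network reaches this router only after the internal subnet is attached to it. The following is a sketch using the default subnet and public-router names from these examples; verify the names against your own deployment before running it:
openstack router add subnet public-router default
You can then inspect the router's interfaces and external gateway with openstack router show public-router.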
[ "source ~/overcloudrc", "openstack network create default --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 101 openstack subnet create default --subnet-range 2001:db8:fd00:6000::/64 --ipv6-address-mode slaac --ipv6-ra-mode slaac --ip-version 6 --network default", "openstack network list openstack subnet list", "openstack network create public --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 100 openstack subnet create public --network public --subnet-range 2001:db8:0:2::/64 --ip-version 6 --gateway 2001:db8::1 --allocation-pool start=2001:db8:0:2::2,end=2001:db8:0:2::ffff --ipv6-address-mode slaac --ipv6-ra-mode slaac", "openstack router create public-router openstack router set public-router --external-gateway public" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/ipv6_networking_for_the_overcloud/assembly_post-deployment-ipv6-operations
15.2.3. Removing a Swap File
15.2.3. Removing a Swap File To remove a swap file: Procedure 15.5. Remove a swap file At a shell prompt, execute the following command to disable the swap file (where /swapfile is the swap file): Remove its entry from the /etc/fstab file. Remove the actual file:
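The /etc/fstab entry to remove is the line that references the swap file. A typical entry looks like the following; the path matches the /swapfile example used in this procedure, and the remaining fields are the usual defaults, which may differ on your system:
/swapfile   swap   swap   defaults   0 0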
[ "swapoff -v /swapfile", "rm /swapfile" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/swap-removing-file
Chapter 80. JSON Jackson
Chapter 80. JSON Jackson Jackson is a Data Format which uses the Jackson Library from("activemq:My.Queue"). marshal().json(JsonLibrary.Jackson). to("mqseries:Another.Queue"); 80.1. Jackson Options The JSON Jackson dataformat supports 20 options, which are listed below. Name Default Java Type Description objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. prettyPrint Boolean To enable pretty printing output nicely formatted. Is by default false. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. autoDiscoverObjectMapper Boolean If set to true then Jackson will lookup for an objectMapper into the registry. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. schemaResolver String Optional schema resolver used to lookup schemas for the data in transit. autoDiscoverSchemaResolver Boolean When not disabled, the SchemaResolver will be looked up into the registry. 
namingStrategy String If set then Jackson will use the the defined Property Naming Strategy.Possible values are: LOWER_CAMEL_CASE, LOWER_DOT_CASE, LOWER_CASE, KEBAB_CASE, SNAKE_CASE and UPPER_CAMEL_CASE. 80.2. Using custom ObjectMapper You can configure JacksonDataFormat to use a custom ObjectMapper in case you need more control of the mapping configuration. If you setup a single ObjectMapper in the registry, then Camel will automatic lookup and use this ObjectMapper . For example if you use Spring Boot, then Spring Boot can provide a default ObjectMapper for you if you have Spring MVC enabled. And this would allow Camel to detect that there is one bean of ObjectMapper class type in the Spring Boot bean registry and then use it. When this happens you should set a INFO logging from Camel. 80.3. Using Jackson for automatic type conversion The camel-jackson module allows integrating Jackson as a Type Converter . This works in a similar way to JAXB that integrates with Camel's type converter. To use this camel-jackson must be enabled, which is done by setting the following options on the CamelContext global options, as shown: @Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext context) { // Enable Jackson JSON type converter. context.getGlobalOptions().put(JacksonConstants.ENABLE_TYPE_CONVERTER, "true"); // Allow Jackson JSON to convert to pojo types also // (by default Jackson only converts to String and other simple types) getContext().getGlobalOptions().put(JacksonConstants.TYPE_CONVERTER_TO_POJO, "true"); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; } The camel-jackson type converter integrates with JAXB which means you can annotate POJO class with JAXB annotations that Jackson can use. You can also use Jackson's own annotations on your POJO classes. 80.4. Dependencies To use Jackson in your camel routes you need to add the dependency on camel-jackson which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 80.5. Spring Boot Auto-Configuration When using json-jackson with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-starter</artifactId> </dependency> The component supports 21 options, which are listed below. Name Description Default Type camel.dataformat.json-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.json-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.json-jackson.auto-discover-object-mapper If set to true then Jackson will lookup for an objectMapper into the registry. false Boolean camel.dataformat.json-jackson.auto-discover-schema-resolver When not disabled, the SchemaResolver will be looked up into the registry. 
camel.dataformat.json-jackson.collection-type: Refers to a custom collection type to look up in the registry. This option should rarely be used, but allows using collection types other than the default java.util.Collection based types. Type: String.
camel.dataformat.json-jackson.content-type-header: Whether the data format should set the Content-Type header with the type from the data format. For example, application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. Default: true. Type: Boolean.
camel.dataformat.json-jackson.disable-features: Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. Each feature should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature. Multiple features can be separated by comma. Type: String.
camel.dataformat.json-jackson.enable-features: Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. Each feature should be a name that matches an enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature. Multiple features can be separated by comma. Type: String.
camel.dataformat.json-jackson.enabled: Whether to enable auto configuration of the json-jackson data format. This is enabled by default. Type: Boolean.
camel.dataformat.json-jackson.include: If you want to marshal a POJO to JSON and the POJO has some fields with null values that you want to skip, you can set this option to NON_NULL. Type: String.
camel.dataformat.json-jackson.json-view: When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option refers to the class which has the JsonView annotations. Type: String.
camel.dataformat.json-jackson.module-class-names: To use custom Jackson modules (com.fasterxml.jackson.databind.Module) specified as a string with FQN class names. Multiple classes can be separated by comma. Type: String.
camel.dataformat.json-jackson.module-refs: To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. Type: String.
camel.dataformat.json-jackson.naming-strategy: If set, Jackson will use the defined property naming strategy. Possible values are: LOWER_CAMEL_CASE, LOWER_DOT_CASE, LOWER_CASE, KEBAB_CASE, SNAKE_CASE and UPPER_CAMEL_CASE. Type: String.
camel.dataformat.json-jackson.object-mapper: Lookup and use the existing ObjectMapper with the given id when using Jackson. Type: String.
camel.dataformat.json-jackson.pretty-print: To enable pretty printing of the output, nicely formatted. Default: false. Type: Boolean.
camel.dataformat.json-jackson.schema-resolver: Optional schema resolver used to look up schemas for the data in transit. Type: String.
camel.dataformat.json-jackson.timezone: If set, Jackson will use the time zone when marshalling/unmarshalling. This option has no effect on the other JSON data formats, such as gson, fastjson and xstream. Type: String.
camel.dataformat.json-jackson.unmarshal-type: Class name of the Java type to use when unmarshalling. Type: String.
camel.dataformat.json-jackson.use-default-object-mapper: Whether to look up and use the default Jackson ObjectMapper from the registry. Default: true. Type: Boolean.
camel.dataformat.json-jackson.use-list: To unmarshal to a List of Map or a List of POJO. Default: false. Type: Boolean.
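As a short illustration of the options above, the following is a minimal sketch of a RouteBuilder that configures the data format programmatically; the Order POJO and the direct: endpoints are hypothetical placeholders rather than part of the original example.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jackson.JacksonDataFormat;

public class OrderJsonRoute extends RouteBuilder {

    // Hypothetical POJO used only for this sketch
    public static class Order {
        public String id;
        public int quantity;
    }

    @Override
    public void configure() {
        // Roughly equivalent to setting unmarshalType and prettyPrint in the option tables above
        JacksonDataFormat jackson = new JacksonDataFormat(Order.class);
        jackson.setPrettyPrint(true);

        // POJO -> JSON
        from("direct:marshalOrder")
            .marshal(jackson);

        // JSON -> Order POJO
        from("direct:unmarshalOrder")
            .unmarshal(jackson)
            .to("log:unmarshalledOrder");
    }
}
In a Spring Boot application, the same behavior can instead be driven by the camel.dataformat.json-jackson.* properties listed above.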
[ "from(\"activemq:My.Queue\"). marshal().json(JsonLibrary.Jackson). to(\"mqseries:Another.Queue\");", "@Bean CamelContextConfiguration contextConfiguration() { return new CamelContextConfiguration() { @Override public void beforeApplicationStart(CamelContext context) { // Enable Jackson JSON type converter. context.getGlobalOptions().put(JacksonConstants.ENABLE_TYPE_CONVERTER, \"true\"); // Allow Jackson JSON to convert to pojo types also // (by default Jackson only converts to String and other simple types) getContext().getGlobalOptions().put(JacksonConstants.TYPE_CONVERTER_TO_POJO, \"true\"); } @Override public void afterApplicationStart(CamelContext camelContext) { } }; }", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jackson</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-json-jackson-dataformat-starter
Chapter 1. Ceph block devices and OpenStack
Chapter 1. Ceph block devices and OpenStack
The Red Hat Enterprise Linux OpenStack Platform Director provides two methods for using Ceph as a back end for Glance, Cinder, Cinder Backup and Nova:
OpenStack creates the Ceph storage cluster: OpenStack Director can create a Ceph storage cluster. This requires configuring templates for the Ceph OSDs. OpenStack handles the installation and configuration of Ceph hosts. With this scenario, OpenStack will install the Ceph monitors with the OpenStack controller hosts.
OpenStack connects to an existing Ceph storage cluster: OpenStack Director, using Red Hat OpenStack Platform 9 and higher, can connect to a Ceph monitor and configure the Ceph storage cluster for use as a back end for OpenStack.
The foregoing methods are the preferred methods for configuring Ceph as a back end for OpenStack, because they handle much of the installation and configuration automatically. This document details the manual procedure for configuring Ceph, QEMU, libvirt and OpenStack to use Ceph as a back end. It is intended for those who do not intend to use the RHEL OSP Director.
Note A running Ceph storage cluster and at least one OpenStack host is required to use Ceph block devices as a back end for OpenStack.
Three parts of OpenStack integrate with Ceph's block devices:
Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly.
Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using Cinder services. Ceph can serve as a back end for OpenStack Cinder and Cinder Backup.
Guest Disks: Guest disks are guest operating system disks. By default, when booting a virtual machine, its disk appears as a file on the file system of the hypervisor, under the /var/lib/nova/instances/<uuid>/ directory.
OpenStack Glance can store images in a Ceph block device, and can use Cinder to boot a virtual machine using a copy-on-write clone of an image.
Important Ceph does not support QCOW2 for hosting a virtual machine disk. To boot virtual machines, either from an ephemeral back end or from a volume, the Glance image format must be RAW.
OpenStack can use Ceph for images, volumes, or guest disks of virtual machines. There is no requirement to use all three.
Additional Resources
See the Red Hat OpenStack Platform documentation for additional details.
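As a hedged sketch of the kind of manual configuration this document goes on to describe, the following commands create dedicated pools for the Glance, Cinder, Cinder Backup and Nova integrations and a Cephx user for the block storage services; the pool and user names are conventional examples only, and placement group sizing and capabilities must be adapted to your cluster.
# Create pools for Glance images, Cinder volumes, Cinder backups, and Nova ephemeral disks
ceph osd pool create images
ceph osd pool create volumes
ceph osd pool create backups
ceph osd pool create vms

# Initialize each pool for use by RBD (repeat for the other pools)
rbd pool init volumes

# Create a Cephx user that the block storage services can use to access the pools
ceph auth get-or-create client.cinder \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'
The resulting keyring, together with the matching settings in the OpenStack service configuration files, is the kind of material the rest of this guide covers.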
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/block_device_to_openstack_guide/ceph-block-devices-and-openstack-rbd-osp
function::str_replace
function::str_replace
Name
function::str_replace - Replaces all instances of a substring with another.
Synopsis
str_replace:string(prnt_str:string,srch_str:string,rplc_str:string)
Arguments
prnt_str: the string to search and replace in.
srch_str: the substring which is used to search in the prnt_str string.
rplc_str: the substring which is used to replace srch_str.
Description
This function returns the given string with all instances of the search substring replaced.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-str-replace
Chapter 1. Introduction to Public-Key Cryptography
Chapter 1. Introduction to Public-Key Cryptography
Public-key cryptography and related standards underlie the security features of many products such as signed and encrypted email, single sign-on, and Transport Layer Security/Secure Sockets Layer (SSL/TLS) communications. This chapter covers the basic concepts of public-key cryptography.
Internet traffic, which passes information through intermediate computers, can be intercepted by a third party:
Eavesdropping. Information remains intact, but its privacy is compromised. For example, someone could gather credit card numbers, record a sensitive conversation, or intercept classified information.
Tampering. Information in transit is changed or replaced and then sent to the recipient. For example, someone could alter an order for goods or change a person's resume.
Impersonation. Information passes to a person who poses as the intended recipient. Impersonation can take two forms:
Spoofing. A person can pretend to be someone else. For example, a person can pretend to have someone else's email address, or a computer can falsely identify itself as a site called www.example.net.
Misrepresentation. A person or organization can misrepresent itself. For example, a site called www.example.net can purport to be an on-line furniture store when it really receives credit-card payments but never sends any goods.
Public-key cryptography provides protection against Internet-based attacks through:
Encryption and decryption. Encryption and decryption allow two communicating parties to disguise information they send to each other. The sender encrypts, or scrambles, information before sending it. The receiver decrypts, or unscrambles, the information after receiving it. While in transit, the encrypted information is unintelligible to an intruder.
Tamper detection. Tamper detection allows the recipient of information to verify that it has not been modified in transit. Any attempts to modify or substitute data are detected.
Authentication. Authentication allows the recipient of information to determine its origin by confirming the sender's identity.
Nonrepudiation. Nonrepudiation prevents the sender of information from claiming at a later date that the information was never sent.
1.1. Encryption and Decryption
Encryption is the process of transforming information so it is unintelligible to anyone but the intended recipient. Decryption is the process of decoding encrypted information. A cryptographic algorithm, also called a cipher, is a mathematical function used for encryption or decryption. Usually, two related functions are used, one for encryption and the other for decryption.
With most modern cryptography, the ability to keep encrypted information secret is based not on the cryptographic algorithm, which is widely known, but on a number called a key that must be used with the algorithm to produce an encrypted result or to decrypt previously encrypted information. Decryption with the correct key is simple. Decryption without the correct key is very difficult, if not impossible.
1.1.1. Symmetric-Key Encryption
With symmetric-key encryption, the encryption key can be calculated from the decryption key and vice versa. With most symmetric algorithms, the same key is used for both encryption and decryption, as shown in Figure 1.1, "Symmetric-Key Encryption".
Figure 1.1. Symmetric-Key Encryption
Implementations of symmetric-key encryption can be highly efficient, so that users do not experience any significant time delay as a result of the encryption and decryption. Symmetric-key encryption is effective only if the symmetric key is kept secret by the two parties involved. If anyone else discovers the key, it affects both confidentiality and authentication. A person with an unauthorized symmetric key not only can decrypt messages sent with that key, but can encrypt new messages and send them as if they came from one of the legitimate parties using the key.
Symmetric-key encryption plays an important role in SSL/TLS communication, which is widely used for authentication, tamper detection, and encryption over TCP/IP networks. SSL/TLS also uses techniques of public-key encryption, which is described in the next section.
1.1.2. Public-Key Encryption
Public-key encryption (also called asymmetric encryption) involves a pair of keys, a public key and a private key, associated with an entity. Each public key is published, and the corresponding private key is kept secret. (For more information about the way public keys are published, see Section 1.3, "Certificates and Authentication".) Data encrypted with a public key can be decrypted only with the corresponding private key. Figure 1.2, "Public-Key Encryption" shows a simplified view of the way public-key encryption works.
Figure 1.2. Public-Key Encryption
The scheme shown in Figure 1.2, "Public-Key Encryption" allows public keys to be freely distributed, while only the holder of the corresponding private key is able to read data encrypted with the public key. In general, to send encrypted data to someone, the data is encrypted with that person's public key, and the person receiving the encrypted data decrypts it with the corresponding private key.
Compared with symmetric-key encryption, public-key encryption requires more processing and may not be feasible for encrypting and decrypting large amounts of data. However, it is possible to use public-key encryption to send a symmetric key, which can then be used to encrypt additional data. This is the approach used by the SSL/TLS protocols.
The reverse of the scheme shown in Figure 1.2, "Public-Key Encryption" also works: data encrypted with a private key can be decrypted only with the corresponding public key. This is not a recommended practice for encrypting sensitive data, however, because it means that anyone with the public key, which is by definition published, could decrypt the data. Nevertheless, private-key encryption is useful because it means the private key can be used to sign data with a digital signature, an important requirement for electronic commerce and other commercial applications of cryptography. Client software such as Mozilla Firefox can then use the public key to confirm that the message was signed with the appropriate private key and that it has not been tampered with since being signed. Section 1.2, "Digital Signatures" illustrates how this confirmation process works.
1.1.3. Key Length and Encryption Strength
Breaking an encryption algorithm means finding the key to access the encrypted data in plain text. For symmetric algorithms, breaking the algorithm usually means trying to determine the key used to encrypt the text. For a public key algorithm, breaking the algorithm usually means acquiring the shared secret information between two recipients. One method of breaking a symmetric algorithm is to simply try every possible key until the right key is found.
For public key algorithms, since half of the key pair is publicly known, the other half (the private key) can be derived using published, though complex, mathematical calculations. Finding the key by exhaustively trying every possibility is called a brute force attack.
Breaking an algorithm introduces the risk of intercepting, or even impersonating and fraudulently verifying, private information. The key strength of an algorithm is determined by finding the fastest method to break the algorithm and comparing it to a brute force attack.
For symmetric keys, encryption strength is often described in terms of the size or length of the keys used to perform the encryption: longer keys generally provide stronger encryption. Key length is measured in bits. An encryption key is considered full strength if the best known attack to break the key is no faster than a brute force attempt to test every key possibility.
Different types of algorithms - particularly public key algorithms - may require different key lengths to achieve the same level of encryption strength as a symmetric-key cipher. The RSA cipher can use only a subset of all possible values for a key of a given length, due to the nature of the mathematical problem on which it is based. Other ciphers, such as those used for symmetric-key encryption, can use all possible values for a key of a given length. More possible key values means more security. Because an RSA key of a given length is easier to break than a symmetric key of the same length, an RSA public-key encryption cipher must have a very long key - at least 2048 bits - to be considered cryptographically strong. On the other hand, symmetric-key ciphers are reckoned to be equivalently strong using a much shorter key length, as little as 80 bits for most algorithms. Similarly, public-key ciphers based on elliptic curve cryptography (ECC), such as the Elliptic Curve Digital Signature Algorithm (ECDSA) ciphers, also require fewer bits than RSA ciphers.
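To make the key-length comparison concrete, the following OpenSSL commands (an illustrative sketch only; the file names are arbitrary and not part of this guide) generate a 2048-bit RSA key pair and a 256-bit elliptic curve key pair:
# Generate a 2048-bit RSA private key and extract its public key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out rsa-key.pem
openssl pkey -in rsa-key.pem -pubout -out rsa-pub.pem

# Generate a P-256 elliptic curve private key and extract its public key
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out ec-key.pem
openssl pkey -in ec-key.pem -pubout -out ec-pub.pem
The 256-bit elliptic curve key is commonly considered comparable in strength to an RSA key of roughly 3072 bits, which illustrates why different algorithm families need different key lengths to reach the same encryption strength.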
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/introduction_to_public_key_cryptography
Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure
Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. In OpenShift Container Platform version 4.14, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 6.2. About Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. 
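After the cluster is installed, a quick way to confirm that these components are present is to list them in their namespace; this is an illustrative check rather than a required installation step:
$ oc get deployment,daemonset,pods -n openshift-kuryr
The kuryr-controller Deployment and the kuryr-cni DaemonSet described above should both appear in the output.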
The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true: The RHOSP version is less than 16. The deployment uses UDP services, or a large number of TCP services on few hypervisors. or The ovn-octavia Octavia driver is disabled. The deployment uses a large number of TCP services on few hypervisors. 6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 6.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr Resource Value Floating IP addresses 3 - plus the expected number of Services of LoadBalancer type Ports 1500 - 1 needed per Pod Routers 1 Subnets 250 - 1 needed per Namespace/Project Networks 250 - 1 needed per Namespace/Project RAM 112 GB vCPUs 28 Volume storage 275 GB Instances 7 Security groups 250 - 1 needed per Service and per NetworkPolicy Security group rules 1000 Server groups 2 - plus 1 for each additional availability zone in each machine pool Load balancers 100 - 1 needed per Service Load balancer listeners 500 - 1 needed per Service-exposed port Load balancer pools 500 - 1 needed per Service-exposed port A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group. 
Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 6.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 6.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 6.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... 
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. 6.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 6.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. 
Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Kuryr SDN does not support automatic unidling by a service. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. 6.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 6.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 6.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. 
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.14 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 6.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 6.11. 
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 6.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 6.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. 
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 6.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 6.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. Additional resources Installation configuration parameters for OpenStack 6.13.1. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. 
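The requirements above translate into a small install-config.yaml fragment. The following is an illustrative sketch in which the subnet UUID, CIDR, and VIP addresses are placeholders that must match your own RHOSP subnet; the VIP guidance is covered in the note that follows.
networking:
  machineNetwork:
  - cidr: 192.0.2.0/24                                    # must match the CIDR of the custom subnet
platform:
  openstack:
    machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf  # placeholder UUID of the DHCP-enabled RHOSP subnet
    apiVIPs:
    - 192.0.2.200                                         # addresses outside the DHCP allocation pool
    ingressVIPs:
    - 192.0.2.201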
Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 6.13.2. Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType . This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 The cluster network plugin to install. The supported values are Kuryr , OVNKubernetes , and OpenShiftSDN . The default value is OVNKubernetes . 3 4 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 6.13.3. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. 
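If you are not sure which of the networks that are visible to your project are provider networks, you can inspect their provider attributes with a short script. The following Python sketch is illustrative only; it assumes the python3-openstacksdk package and a clouds.yaml entry named shiftstack, and most provider attributes are visible only to users with administrative privileges and show as None otherwise.

#!/usr/bin/env python3
"""List networks and their provider attributes.

Illustrative sketch; assumes python3-openstacksdk and a clouds.yaml entry
named "shiftstack". Provider fields usually require admin credentials.
"""
import openstack

conn = openstack.connect(cloud="shiftstack")  # assumption: adjust the cloud name

# Print the provider details for every network the account can see.
for net in conn.network.networks():
    print(
        f"{net.name}: type={net.provider_network_type}, "
        f"physical_network={net.provider_physical_network}, "
        f"shared={net.is_shared}, external={net.is_router_external}"
    )

A flat or vlan network type together with a physical network name indicates a provider network of the kinds described above.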
In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network. OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 6.13.3.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift <network_name> To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift --network <network_name> --subnet-range <cidr> <subnet_name> To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 6.13.3.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described in "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet.
Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 6.13.4. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 6.13.5. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. 
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 6.13.6. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. 
If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the cidr value under networking.machineNetwork to a range that matches your intended Neutron subnet. 6.13.7. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 6.13.8. Modifying the network type By default, the installation program selects the OVNKubernetes network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program. Procedure In a command prompt, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["networkType"] = "Kuryr"; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set networking.networkType to "Kuryr" . 6.14. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 6.15. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). 
If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 6.16. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. 
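The hostname is injected by appending an /etc/hostname entry to the Ignition storage.files list as a base64-encoded data URL, which is what the bootstrap script above does and what the control plane script in the next procedure repeats for each machine. The following standalone Python sketch shows only that encoding step; it is not part of the documented scripts, and the hostname value is a placeholder.

#!/usr/bin/env python3
"""Show how a hostname becomes an Ignition storage.files entry.

Standalone illustration; the hostname below is a placeholder. The real
scripts derive the name from the INFRA_ID environment variable.
"""
import base64
import json

hostname = "demo-bootstrap"  # placeholder hostname
hostname_b64 = base64.standard_b64encode((hostname + "\n").encode()).decode()

# One storage.files entry per machine, written to /etc/hostname at first boot.
entry = {
    "path": "/etc/hostname",
    "mode": 420,  # decimal for octal 0644
    "contents": {
        "source": "data:text/plain;charset=utf-8;base64," + hostname_b64,
    },
}
print(json.dumps(entry, indent=2))

Appending one such entry per machine, as the control plane script does in its loop, gives each node its own hostname at first boot.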
Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 6.17. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 6.18. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 6.19. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 6.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.21. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 6.22. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 6.23. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. 
You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.24. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 6.25. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
[ "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787", "(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml 
https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit networks.operator.openshift.io cluster", "apiVersion: 
operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign 
>\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "openshift-install --log-level debug wait-for install-complete" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installing-openstack-user-kuryr
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_ruby_client/making-open-source-more-inclusive
Release notes for Eclipse Temurin 11.0.16
Release notes for Eclipse Temurin 11.0.16 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.16/index
Chapter 3. MachineAutoscaler [autoscaling.openshift.io/v1beta1]
Chapter 3. MachineAutoscaler [autoscaling.openshift.io/v1beta1] Description MachineAutoscaler is the Schema for the machineautoscalers API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of constraints of a scalable resource status object Most recently observed status of a scalable resource 3.1.1. .spec Description Specification of constraints of a scalable resource Type object Required maxReplicas minReplicas scaleTargetRef Property Type Description maxReplicas integer MaxReplicas constrains the maximal number of replicas of a scalable resource minReplicas integer MinReplicas constrains the minimal number of replicas of a scalable resource scaleTargetRef object ScaleTargetRef holds reference to a scalable resource 3.1.2. .spec.scaleTargetRef Description ScaleTargetRef holds reference to a scalable resource Type object Required kind name Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name specifies a name of an object, e.g. worker-us-east-1a. Scalable resources are expected to exist under a single namespace. 3.1.3. .status Description Most recently observed status of a scalable resource Type object Property Type Description lastTargetRef object LastTargetRef holds reference to the recently observed scalable resource 3.1.4. .status.lastTargetRef Description LastTargetRef holds reference to the recently observed scalable resource Type object Required kind name Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name specifies a name of an object, e.g. worker-us-east-1a. 
Scalable resources are expected to exist under a single namespace. 3.2. API endpoints The following API endpoints are available: /apis/autoscaling.openshift.io/v1beta1/machineautoscalers GET : list objects of kind MachineAutoscaler /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers DELETE : delete collection of MachineAutoscaler GET : list objects of kind MachineAutoscaler POST : create a MachineAutoscaler /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name} DELETE : delete a MachineAutoscaler GET : read the specified MachineAutoscaler PATCH : partially update the specified MachineAutoscaler PUT : replace the specified MachineAutoscaler /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name}/status GET : read status of the specified MachineAutoscaler PATCH : partially update status of the specified MachineAutoscaler PUT : replace status of the specified MachineAutoscaler 3.2.1. /apis/autoscaling.openshift.io/v1beta1/machineautoscalers Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind MachineAutoscaler Table 3.2. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscalerList schema 401 - Unauthorized Empty 3.2.2. /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of MachineAutoscaler Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind MachineAutoscaler Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscalerList schema 401 - Unauthorized Empty HTTP method POST Description create a MachineAutoscaler Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body MachineAutoscaler schema Table 3.11. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 201 - Created MachineAutoscaler schema 202 - Accepted MachineAutoscaler schema 401 - Unauthorized Empty 3.2.3. /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the MachineAutoscaler namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a MachineAutoscaler Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MachineAutoscaler Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MachineAutoscaler Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MachineAutoscaler Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body MachineAutoscaler schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 201 - Created MachineAutoscaler schema 401 - Unauthorized Empty 3.2.4. /apis/autoscaling.openshift.io/v1beta1/namespaces/{namespace}/machineautoscalers/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the MachineAutoscaler namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified MachineAutoscaler Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified MachineAutoscaler Table 3.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.30. Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK MachineAutoscaler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified MachineAutoscaler Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body MachineAutoscaler schema Table 3.34. HTTP responses HTTP code Response body 200 - OK MachineAutoscaler schema 201 - Created MachineAutoscaler schema 401 - Unauthorized Empty
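For reference, a minimal MachineAutoscaler manifest that sets the spec fields described in this chapter might look like the following sketch. The machine set name, namespace, and replica bounds are illustrative placeholders rather than values taken from this reference; substitute the compute machine set that you want to scale.

Example MachineAutoscaler resource (illustrative)

apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a              # any name; often matches the target machine set
  namespace: openshift-machine-api     # assumed namespace of the target machine set
spec:
  minReplicas: 1                       # minReplicas: lower bound for the scalable resource
  maxReplicas: 12                      # maxReplicas: upper bound for the scalable resource
  scaleTargetRef:                      # scaleTargetRef: the scalable resource to adjust
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a

You can create the object with a command such as oc apply -f machine-autoscaler.yaml and then confirm it with oc get machineautoscaler -n openshift-machine-api.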
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator
Chapter 3. Automatically scaling pods with the Custom Metrics Autoscaler Operator 3.1. Release notes 3.1.1. Custom Metrics Autoscaler Operator release notes The release notes for the Custom Metrics Autoscaler Operator for Red Hat OpenShift describe new features and enhancements, deprecated features, and known issues. The Custom Metrics Autoscaler Operator uses the Kubernetes-based Event Driven Autoscaler (KEDA) and is built on top of the OpenShift Container Platform horizontal pod autoscaler (HPA). Note The Custom Metrics Autoscaler Operator for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 3.1.1.1. Supported versions The following table defines the Custom Metrics Autoscaler Operator versions for each OpenShift Container Platform version. Version OpenShift Container Platform version General availability 2.14.1 4.16 General availability 2.14.1 4.15 General availability 2.14.1 4.14 General availability 2.14.1 4.13 General availability 2.14.1 4.12 General availability 3.1.1.2. Custom Metrics Autoscaler Operator 2.14.1-467 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-467 provides a CVE fix and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:7348 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.1.2.1. Bug fixes Previously, the root file system of the Custom Metrics Autoscaler Operator pod was writable, which is unnecessary and could present security issues. This update makes the pod root file system read-only, which addresses the potential security issue. ( OCPBUGS-37989 ) 3.1.2. Release notes for past releases of the Custom Metrics Autoscaler Operator The following release notes are for earlier versions of the Custom Metrics Autoscaler Operator. For the current version, see Custom Metrics Autoscaler Operator release notes . 3.1.2.1. Custom Metrics Autoscaler Operator 2.14.1-454 release notes This release of the Custom Metrics Autoscaler Operator 2.14.1-454 provides a CVE fix, a new feature, and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:5865 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.1.1. New features and enhancements 3.1.2.1.1.1. Support for the Cron trigger with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use the Cron trigger to scale pods based on an hourly schedule. When your specified time frame starts, the Custom Metrics Autoscaler Operator scales pods to your desired number. When the time frame ends, the Operator scales the pods back down to the previous level. For more information, see Understanding the Cron trigger . 3.1.2.1.2. Bug fixes Previously, if you made changes to audit configuration parameters in the KedaController custom resource, the keda-metrics-server-audit-policy config map would not get updated.
As a consequence, you could not change the audit configuration parameters after the initial deployment of the Custom Metrics Autoscaler. With this fix, changes to the audit configuration now render properly in the config map, allowing you to change the audit configuration any time after installation. ( OCPBUGS-32521 ) 3.1.2.2. Custom Metrics Autoscaler Operator 2.13.1 release notes This release of the Custom Metrics Autoscaler Operator 2.13.1-421 provides a new feature and a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:4837 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.2.1. New features and enhancements 3.1.2.2.1.1. Support for custom certificates with the Custom Metrics Autoscaler Operator The Custom Metrics Autoscaler Operator can now use custom service CA certificates to connect securely to TLS-enabled metrics sources, such as an external Kafka cluster or an external Prometheus service. By default, the Operator uses automatically-generated service certificates to connect to on-cluster services only. There is a new field in the KedaController object that allows you to load custom server CA certificates for connecting to external services by using config maps. For more information, see Custom CA certificates for the Custom Metrics Autoscaler . 3.1.2.2.2. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. Scaled objects containing cron triggers are currently not supported for the custom metrics autoscaler. ( OCPBUGS-34018 ) 3.1.2.3. Custom Metrics Autoscaler Operator 2.12.1-394 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-394 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:2901 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of Kubernetes-based Event Driven Autoscaler (KEDA). 3.1.2.3.1. Bug fixes Previously, the protojson.Unmarshal function entered into an infinite loop when unmarshaling certain forms of invalid JSON. This condition could occur when unmarshaling into a message that contains a google.protobuf.Any value or when the UnmarshalOptions.DiscardUnknown option is set. This release fixes this issue. ( OCPBUGS-30305 ) Previously, when parsing a multipart form, either explicitly with the Request.ParseMultipartForm method or implicitly with the Request.FormValue , Request.PostFormValue , or Request.FormFile method, the limits on the total size of the parsed form were not applied to the memory consumed. This could cause memory exhaustion. With this fix, the parsing process now correctly limits the maximum size of form lines while reading a single form line. 
( OCPBUGS-30360 ) Previously, when following an HTTP redirect to a domain that is not on a matching subdomain or on an exact match of the initial domain, an HTTP client would not forward sensitive headers, such as Authorization or Cookie . For example, a redirect from example.com to www.example.com would forward the Authorization header, but a redirect to www.example.org would not forward the header. This release fixes this issue. ( OCPBUGS-30365 ) Previously, verifying a certificate chain that contains a certificate with an unknown public key algorithm caused the certificate verification process to panic. This condition affected all crypto and Transport Layer Security (TLS) clients and servers that set the Config.ClientAuth parameter to the VerifyClientCertIfGiven or RequireAndVerifyClientCert value. The default behavior is for TLS servers to not verify client certificates. This release fixes this issue. ( OCPBUGS-30370 ) Previously, if errors returned from the MarshalJSON method contained user-controlled data, an attacker could have used the data to break the contextual auto-escaping behavior of the HTML template package. This condition would allow for subsequent actions to inject unexpected content into the templates. This release fixes this issue. ( OCPBUGS-30397 ) Previously, the net/http and golang.org/x/net/http2 Go packages did not limit the number of CONTINUATION frames for an HTTP/2 request. This condition could result in excessive CPU consumption. This release fixes this issue. ( OCPBUGS-30894 ) 3.1.2.4. Custom Metrics Autoscaler Operator 2.12.1-384 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-384 provides a bug fix for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHBA-2024:2043 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.4.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images were missing time zone information. As a consequence, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds are updated to include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-32395 ) 3.1.2.5. Custom Metrics Autoscaler Operator 2.12.1-376 release notes This release of the Custom Metrics Autoscaler Operator 2.12.1-376 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2024:1812 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.5.1. Bug fixes Previously, if invalid values such as nonexistent namespaces were specified in scaled object metadata, the underlying scaler clients would not free, or close, their client descriptors, resulting in a slow memory leak. This fix properly closes the underlying client descriptors when there are errors, preventing memory from leaking. ( OCPBUGS-30145 ) Previously the ServiceMonitor custom resource (CR) for the keda-metrics-apiserver pod was not functioning, because the CR referenced an incorrect metrics port name of http . 
This fix corrects the ServiceMonitor CR to reference the proper port name of metrics . As a result, the Service Monitor functions properly. ( OCPBUGS-25806 ) 3.1.2.6. Custom Metrics Autoscaler Operator 2.11.2-322 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-322 provides security updates and bug fixes for running the Operator in an OpenShift Container Platform cluster. The following advisory is available for the RHSA-2023:6144 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.6.1. Bug fixes Because the Custom Metrics Autoscaler Operator version 2.11.2-311 was released without a required volume mount in the Operator deployment, the Custom Metrics Autoscaler Operator pod would restart every 15 minutes. This fix adds the required volume mount to the Operator deployment. As a result, the Operator no longer restarts every 15 minutes. ( OCPBUGS-22361 ) 3.1.2.7. Custom Metrics Autoscaler Operator 2.11.2-311 release notes This release of the Custom Metrics Autoscaler Operator 2.11.2-311 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.11.2-311 were released in RHBA-2023:5981 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.7.1. New features and enhancements 3.1.2.7.1.1. Red Hat OpenShift Service on AWS (ROSA) and OpenShift Dedicated are now supported The Custom Metrics Autoscaler Operator 2.11.2-311 can be installed on OpenShift ROSA and OpenShift Dedicated managed clusters. Previous versions of the Custom Metrics Autoscaler Operator could be installed only in the openshift-keda namespace. This prevented the Operator from being installed on OpenShift ROSA and OpenShift Dedicated clusters. This version of the Custom Metrics Autoscaler Operator allows installation in other namespaces such as openshift-operators or keda , enabling installation into ROSA and Dedicated clusters. 3.1.2.7.2. Bug fixes Previously, if the Custom Metrics Autoscaler Operator was installed and configured, but not in use, the OpenShift CLI reported the couldn't get resource list for external.metrics.k8s.io/v1beta1: Got empty response for: external.metrics.k8s.io/v1beta1 error after any oc command was entered. The message, although harmless, could have caused confusion. With this fix, the Got empty response for: external.metrics... error no longer appears inappropriately. ( OCPBUGS-15779 ) Previously, any annotation or label changes to objects managed by the Custom Metrics Autoscaler were reverted by the Custom Metrics Autoscaler Operator any time the Keda Controller was modified, for example after a configuration change. This caused continuous changes to the labels in your objects. The Custom Metrics Autoscaler now uses its own annotation to manage labels and annotations, and annotations or labels are no longer inappropriately reverted. ( OCPBUGS-15590 ) 3.1.2.8. Custom Metrics Autoscaler Operator 2.10.1-267 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1-267 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1-267 were released in RHBA-2023:4089 .
Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.8.1. Bug fixes Previously, the custom-metrics-autoscaler and custom-metrics-autoscaler-adapter images did not contain time zone information. Because of this, scaled objects with cron triggers failed to work because the controllers were unable to find time zone information. With this fix, the image builds now include time zone information. As a result, scaled objects containing cron triggers now function properly. ( OCPBUGS-15264 ) Previously, the Custom Metrics Autoscaler Operator would attempt to take ownership of all managed objects, including objects in other namespaces and cluster-scoped objects. Because of this, the Custom Metrics Autoscaler Operator was unable to create the role binding for reading the credentials necessary to be an API server. This caused errors in the kube-system namespace. With this fix, the Custom Metrics Autoscaler Operator skips adding the ownerReference field to any object in another namespace or any cluster-scoped object. As a result, the role binding is now created without any errors. ( OCPBUGS-15038 ) Previously, the Custom Metrics Autoscaler Operator added an ownerReferences field to the openshift-keda namespace. While this did not cause functionality problems, the presence of this field could have caused confusion for cluster administrators. With this fix, the Custom Metrics Autoscaler Operator does not add the ownerReference field to the openshift-keda namespace. As a result, the openshift-keda namespace no longer has a superfluous ownerReference field. ( OCPBUGS-15293 ) Previously, if you used a Prometheus trigger configured with an authentication method other than pod identity, and the podIdentity parameter was set to none , the trigger would fail to scale. With this fix, the Custom Metrics Autoscaler for OpenShift now properly handles the none pod identity provider type. As a result, a Prometheus trigger configured with an authentication method other than pod identity, and with the podIdentity parameter set to none , now properly scales. ( OCPBUGS-15274 ) 3.1.2.9. Custom Metrics Autoscaler Operator 2.10.1 release notes This release of the Custom Metrics Autoscaler Operator 2.10.1 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.10.1 were released in RHEA-2023:3199 . Important Before installing this version of the Custom Metrics Autoscaler Operator, remove any previously installed Technology Preview versions or the community-supported version of KEDA. 3.1.2.9.1. New features and enhancements 3.1.2.9.1.1. Custom Metrics Autoscaler Operator general availability The Custom Metrics Autoscaler Operator is now generally available as of Custom Metrics Autoscaler Operator version 2.10.1. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.2.9.1.2.
Performance metrics You can now use the Prometheus Query Language (PromQL) to query metrics on the Custom Metrics Autoscaler Operator. 3.1.2.9.1.3. Pausing the custom metrics autoscaling for scaled objects You can now pause the autoscaling of a scaled object, as needed, and resume autoscaling when ready. 3.1.2.9.1.4. Replica fall back for scaled objects You can now specify the number of replicas to fall back to if a scaled object fails to get metrics from the source. 3.1.2.9.1.5. Customizable HPA naming for scaled objects You can now specify a custom name for the horizontal pod autoscaler in scaled objects. 3.1.2.9.1.6. Activation and scaling thresholds Because the horizontal pod autoscaler (HPA) cannot scale to or from 0 replicas, the Custom Metrics Autoscaler Operator does that scaling, after which the HPA performs the scaling. You can now specify when the HPA takes over autoscaling, based on the number of replicas. This allows for more flexibility with your scaling policies. 3.1.2.10. Custom Metrics Autoscaler Operator 2.8.2-174 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2-174 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2-174 were released in RHEA-2023:1683 . Important The Custom Metrics Autoscaler Operator version 2.8.2-174 is a Technology Preview feature. 3.1.2.10.1. New features and enhancements 3.1.2.10.1.1. Operator upgrade support You can now upgrade from a prior version of the Custom Metrics Autoscaler Operator. See "Changing the update channel for an Operator" in the "Additional resources" for information on upgrading an Operator. 3.1.2.10.1.2. must-gather support You can now collect data about the Custom Metrics Autoscaler Operator and its components by using the OpenShift Container Platform must-gather tool. Currently, the process for using the must-gather tool with the Custom Metrics Autoscaler is different than for other operators. See "Gathering debugging data in the "Additional resources" for more information. 3.1.2.11. Custom Metrics Autoscaler Operator 2.8.2 release notes This release of the Custom Metrics Autoscaler Operator 2.8.2 provides new features and bug fixes for running the Operator in an OpenShift Container Platform cluster. The components of the Custom Metrics Autoscaler Operator 2.8.2 were released in RHSA-2023:1042 . Important The Custom Metrics Autoscaler Operator version 2.8.2 is a Technology Preview feature. 3.1.2.11.1. New features and enhancements 3.1.2.11.1.1. Audit Logging You can now gather and view audit logs for the Custom Metrics Autoscaler Operator and its associated components. Audit logs are security-relevant chronological sets of records that document the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 3.1.2.11.1.2. Scale applications based on Apache Kafka metrics You can now use the KEDA Apache kafka trigger/scaler to scale deployments based on an Apache Kafka topic. 3.1.2.11.1.3. Scale applications based on CPU metrics You can now use the KEDA CPU trigger/scaler to scale deployments based on CPU metrics. 3.1.2.11.1.4. Scale applications based on memory metrics You can now use the KEDA memory trigger/scaler to scale deployments based on memory metrics. 3.2. 
Custom Metrics Autoscaler Operator overview As a developer, you can use the Custom Metrics Autoscaler Operator for Red Hat OpenShift to specify how OpenShift Container Platform should automatically increase or decrease the number of pods for a deployment, stateful set, custom resource, or job based on custom metrics that are not limited to CPU or memory. The Custom Metrics Autoscaler Operator is an optional Operator, based on the Kubernetes Event Driven Autoscaler (KEDA), that allows workloads to be scaled using additional metrics sources other than pod metrics. The custom metrics autoscaler currently supports only the Prometheus, CPU, memory, and Apache Kafka metrics. The Custom Metrics Autoscaler Operator scales your pods up and down based on custom, external metrics from specific applications. Your other applications continue to use other scaling methods. You configure triggers , also known as scalers, which are the source of events and metrics that the custom metrics autoscaler uses to determine how to scale. The custom metrics autoscaler uses a metrics API to convert the external metrics to a form that OpenShift Container Platform can use. The custom metrics autoscaler creates a horizontal pod autoscaler (HPA) that performs the actual scaling. To use the custom metrics autoscaler, you create a ScaledObject or ScaledJob object for a workload, which is a custom resource (CR) that defines the scaling metadata. You specify the deployment or job to scale, the source of the metrics to scale on (trigger), and other parameters such as the minimum and maximum replica counts allowed. Note You can create only one scaled object or scaled job for each workload that you want to scale. Also, you cannot use a scaled object or scaled job and the horizontal pod autoscaler (HPA) on the same workload. The custom metrics autoscaler, unlike the HPA, can scale to zero. If you set the minReplicaCount value in the custom metrics autoscaler CR to 0 , the custom metrics autoscaler scales the workload down from 1 replica to 0 replicas or up from 0 replicas to 1 replica. This is known as the activation phase . After scaling up to 1 replica, the HPA takes control of the scaling. This is known as the scaling phase . Some triggers allow you to change the number of replicas that are scaled by the custom metrics autoscaler. In all cases, the parameter that configures the activation phase uses the same name as the corresponding scaling parameter, prefixed with activation . For example, if the threshold parameter configures scaling, activationThreshold would configure activation. Configuring the activation and scaling phases gives you more flexibility with your scaling policies. For example, you can configure a higher activation value to prevent scaling up or down if the metric is particularly low. The activation value takes priority over the scaling value if the two would lead to different decisions. For example, if the threshold is set to 10 and the activationThreshold is 50 , and the metric reports 40 , the scaler is not active and the pods are scaled to zero even if the HPA requires 4 instances. Figure 3.1. Custom metrics autoscaler workflow You create or modify a scaled object custom resource for a workload on a cluster. The object contains the scaling configuration for that workload. Prior to accepting the new object, the OpenShift API server sends it to the custom metrics autoscaler admission webhooks process to ensure that the object is valid. If validation succeeds, the API server persists the object.
The custom metrics autoscaler controller watches for new or modified scaled objects. When the OpenShift API server notifies the controller of a change, the controller monitors any external trigger sources, also known as data sources, that are specified in the object for changes to the metrics data. One or more scalers request scaling data from the external trigger source. For example, for a Kafka trigger type, the controller uses the Kafka scaler to communicate with a Kafka instance to obtain the data requested by the trigger. The controller creates a horizontal pod autoscaler object for the scaled object. As a result, the Horizontal Pod Autoscaler (HPA) Operator starts monitoring the scaling data associated with the trigger. The HPA requests scaling data from the cluster OpenShift API server endpoint. The OpenShift API server endpoint is served by the custom metrics autoscaler metrics adapter. When the metrics adapter receives a request for custom metrics, it uses a GRPC connection to the controller to request the most recent trigger data received from the scaler. The HPA makes scaling decisions based upon the data received from the metrics adapter and scales the workload up or down by increasing or decreasing the replicas. As it operates, a workload can affect the scaling metrics. For example, if a workload is scaled up to handle work in a Kafka queue, the queue size decreases after the workload processes all the work. As a result, the workload is scaled down. If the metrics are in a range specified by the minReplicaCount value, the custom metrics autoscaler controller disables all scaling, and leaves the replica count at a fixed level. If the metrics exceed that range, the custom metrics autoscaler controller enables scaling and allows the HPA to scale the workload. While scaling is disabled, the HPA does not take any action. 3.2.1. Custom CA certificates for the Custom Metrics Autoscaler By default, the Custom Metrics Autoscaler Operator uses automatically-generated service CA certificates to connect to on-cluster services. If you want to use off-cluster services that require custom CA certificates, you can add the required certificates to a config map. Then, add the config map to the KedaController custom resource as described in Installing the custom metrics autoscaler . The Operator loads those certificates on start-up and registers them as trusted. The config maps can contain one or more certificate files that contain one or more PEM-encoded CA certificates. Or, you can use separate config maps for each certificate file. Note If you later update the config map to add additional certificates, you must restart the keda-operator-* pod for the changes to take effect. 3.3. Installing the custom metrics autoscaler You can use the OpenShift Container Platform web console to install the Custom Metrics Autoscaler Operator. The installation creates the following five CRDs: ClusterTriggerAuthentication KedaController ScaledJob ScaledObject TriggerAuthentication 3.3.1. Installing the custom metrics autoscaler You can use the following procedure to install the Custom Metrics Autoscaler Operator. Prerequisites Remove any previously installed Technology Preview versions of the Custom Metrics Autoscaler Operator. Remove any versions of the community-based KEDA.
Also, remove the KEDA 1.x custom resource definitions by running the following commands: USD oc delete crd scaledobjects.keda.k8s.io USD oc delete crd triggerauthentications.keda.k8s.io Optional: If you need the Custom Metrics Autoscaler Operator to connect to off-cluster services, such as an external Kafka cluster or an external Prometheus service, put any required service CA certificates into a config map. The config map must exist in the same namespace where the Operator is installed. For example: USD oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Custom Metrics Autoscaler from the list of available Operators, and click Install . On the Install Operator page, ensure that the All namespaces on the cluster (default) option is selected for Installation Mode . This installs the Operator in all namespaces. Ensure that the openshift-keda namespace is selected for Installed Namespace . OpenShift Container Platform creates the namespace, if not present in your cluster. Click Install . Verify the installation by listing the Custom Metrics Autoscaler Operator components: Navigate to Workloads Pods . Select the openshift-keda project from the drop-down menu and verify that the custom-metrics-autoscaler-operator-* pod is running. Navigate to Workloads Deployments to verify that the custom-metrics-autoscaler-operator deployment is running. Optional: Verify the installation in the OpenShift CLI using the following commands: USD oc get all -n openshift-keda The output appears similar to the following: Example output NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m Install the KedaController custom resource, which creates the required CRDs: In the OpenShift Container Platform web console, click Operators Installed Operators . Click Custom Metrics Autoscaler . On the Operator Details page, click the KedaController tab. On the KedaController tab, click Create KedaController and edit the file. kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: ["RequestReceived"] omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" serviceAccount: {} 1 Specifies a single namespace in which the Custom Metrics Autoscaler Operator should scale applications. Leave it blank or leave it empty to scale applications in all namespaces. This field should have a namespace or be empty. The default value is empty. 2 Specifies the level of verbosity for the Custom Metrics Autoscaler Operator log messages. The allowed values are debug , info , error . The default is info . 3 Specifies the logging format for the Custom Metrics Autoscaler Operator log messages. The allowed values are console or json . The default is console . 4 Optional: Specifies one or more config maps with CA certificates, which the Custom Metrics Autoscaler Operator can use to connect securely to TLS-enabled metrics sources. 
5 Specifies the logging level for the Custom Metrics Autoscaler Metrics Server. The allowed values are 0 for info and 4 for debug . The default is 0 . 6 Activates audit logging for the Custom Metrics Autoscaler Operator and specifies the audit policy to use, as described in the "Configuring audit logging" section. Click Create to create the KEDA controller. 3.4. Understanding custom metrics autoscaler triggers Triggers, also known as scalers, provide the metrics that the Custom Metrics Autoscaler Operator uses to scale your pods. The custom metrics autoscaler currently supports the Prometheus, CPU, memory, Apache Kafka, and cron triggers. You use a ScaledObject or ScaledJob custom resource to configure triggers for specific objects, as described in the sections that follow. You can configure a certificate authority to use with your scaled objects or for all scalers in the cluster . 3.4.1. Understanding the Prometheus trigger You can scale pods based on Prometheus metrics, which can use the installed OpenShift Container Platform monitoring or an external Prometheus server as the metrics source. See "Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring" for information on the configurations required to use the OpenShift Container Platform monitoring as a source for metrics. Note If Prometheus is collecting metrics from the application that the custom metrics autoscaler is scaling, do not set the minimum replicas to 0 in the custom resource. If there are no application pods, the custom metrics autoscaler does not have any metrics to scale on. Example scaled object with a Prometheus target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: # ... triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job="test-app"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: "false" 9 unsafeSsl: "false" 10 1 Specifies Prometheus as the trigger type. 2 Specifies the address of the Prometheus server. This example uses OpenShift Container Platform monitoring. 3 Optional: Specifies the namespace of the object you want to scale. This parameter is mandatory if using OpenShift Container Platform monitoring as a source for the metrics. 4 Specifies the name to identify the metric in the external.metrics.k8s.io API. If you are using more than one trigger, all metric names must be unique. 5 Specifies the value that triggers scaling. Must be specified as a quoted string value. 6 Specifies the Prometheus query to use. 7 Specifies the authentication method to use. Prometheus scalers support bearer authentication ( bearer ), basic authentication ( basic ), or TLS authentication ( tls ). You configure the specific authentication parameters in a trigger authentication, as discussed in a following section. As needed, you can also use a secret. 8 Optional: Passes the X-Scope-OrgID header to multi-tenant Cortex or Mimir storage for Prometheus. This parameter is required only with multi-tenant Prometheus storage, to indicate which data Prometheus should return. 9 Optional: Specifies how the trigger should proceed if the Prometheus target is lost. If true , the trigger continues to operate if the Prometheus target is lost. This is the default behavior. If false , the trigger returns an error if the Prometheus target is lost. 
10 Optional: Specifies whether the certificate check should be skipped. For example, you might skip the check if you are running in a test environment and using self-signed certificates at the Prometheus endpoint. If false , the certificate check is performed. This is the default behavior. If true , the certificate check is not performed. Important Skipping the check is not recommended. 3.4.1.1. Configuring the custom metrics autoscaler to use OpenShift Container Platform monitoring You can use the installed OpenShift Container Platform Prometheus monitoring as a source for the metrics used by the custom metrics autoscaler. However, there are some additional configurations you must perform. For your scaled objects to be able to read the OpenShift Container Platform Prometheus metrics, you must use a trigger authentication or a cluster trigger authentication in order to provide the authentication information required. The following procedure differs depending on which trigger authentication method you use. For more information on trigger authentications, see "Understanding custom metrics autoscaler trigger authentications". Note These steps are not required for an external Prometheus source. You must perform the following tasks, as described in this section: Create a service account. Create a secret that generates a token for the service account. Create the trigger authentication. Create a role. Add that role to the service account. Reference the token in the trigger authentication object used by Prometheus. Prerequisites OpenShift Container Platform monitoring must be installed. Monitoring of user-defined workloads must be enabled in OpenShift Container Platform monitoring, as described in the Creating a user-defined workload monitoring config map section. The Custom Metrics Autoscaler Operator must be installed. Procedure Change to the appropriate project: USD oc project <project_name> 1 1 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. Create a service account and token, if your cluster does not have one: Create a service account object by using the following command: USD oc create serviceaccount thanos 1 1 Specifies the name of the service account. Create a secret YAML to generate a service account token: apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token 1 Specifies the name of the service account. Create the secret object by using the following command: USD oc create -f <file_name>.yaml Use the following command to locate the token assigned to the service account: USD oc describe serviceaccount thanos 1 1 Specifies the name of the service account. Example output Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none> 1 Use this token in the trigger authentication. 
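If you want to check the token value itself, you can extract it directly from the generated secret instead of describing the service account. The following command is a sketch that assumes the example secret name thanos-token used above: USD oc get secret thanos-token -o jsonpath='{.data.token}' | base64 --decode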
Create a trigger authentication with the service account token: Create a YAML file similar to the following: apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt 1 Specifies one of the following trigger authentication methods: If you are using a trigger authentication, specify TriggerAuthentication . This example configures a trigger authentication. If you are using a cluster trigger authentication, specify ClusterTriggerAuthentication . 2 Specifies that this object uses a secret for authorization. 3 Specifies the authentication parameter to supply by using the token. 4 Specifies the name of the token to use. 5 Specifies the key in the token to use with the specified parameter. Create the CR object: USD oc create -f <file-name>.yaml Create a role for reading Thanos metrics: Create a YAML file with the following parameters: apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - "" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch Create the CR object: USD oc create -f <file-name>.yaml Create a role binding for reading Thanos metrics: Create a YAML file similar to the following: apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5 1 Specifies one of the following object types: If you are using a trigger authentication, specify RoleBinding . If you are using a cluster trigger authentication, specify ClusterRoleBinding . 2 Specifies the name of the role you created. 3 Specifies one of the following projects: If you are using a trigger authentication, specify the project with the object you want to scale. If you are using a cluster trigger authentication, specify the openshift-keda project. 4 Specifies the name of the service account to bind to the role. 5 Specifies the project where you previously created the service account. Create the CR object: USD oc create -f <file-name>.yaml You can now deploy a scaled object or scaled job to enable autoscaling for your application, as described in "Understanding how to add custom metrics autoscalers". To use OpenShift Container Platform monitoring as the source, in the trigger, or scaler, you must include the following parameters: triggers.type must be prometheus triggers.metadata.serverAddress must be https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 triggers.metadata.authModes must be bearer triggers.metadata.namespace must be set to the namespace of the object to scale triggers.authenticationRef must point to the trigger authentication resource specified in the step Additional resources Understanding custom metrics autoscaler trigger authentications 3.4.2. Understanding the CPU trigger You can scale pods based on CPU metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the CPU usage that you specify. The autoscaler increases or decreases the number of replicas between the minimum and maximum numbers to maintain the specified CPU utilization across all pods. 
The CPU trigger considers the CPU utilization of the entire pod. If the pod has multiple containers, the CPU trigger considers the total CPU utilization of all containers in the pod. Note This trigger cannot be used with the ScaledJob custom resource. When using a CPU trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a CPU target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: # ... triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4 1 Specifies CPU as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Specifies the minimum number of replicas when scaling down. For a CPU trigger, enter a value of 1 or greater, because the HPA cannot scale to zero if you are using only CPU metrics. 3.4.3. Understanding the memory trigger You can scale pods based on memory metrics. This trigger uses cluster metrics as the source for metrics. The custom metrics autoscaler scales the pods associated with an object to maintain the average memory usage that you specify. The autoscaler increases and decreases the number of replicas between the minimum and maximum numbers to maintain the specified memory utilization across all pods. The memory trigger considers the memory utilization of the entire pod. If the pod has multiple containers, the memory utilization is the sum of all of the containers. Note This trigger cannot be used with the ScaledJob custom resource. When using a memory trigger to scale an object, the object does not scale to 0 , even if you are using multiple triggers. Example scaled object with a memory target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: # ... triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4 1 Specifies memory as the trigger type. 2 Specifies the type of metric to use, either Utilization or AverageValue . 3 Specifies the value that triggers scaling. Must be specified as a quoted string value. When using Utilization , the target value is the average of the resource metrics across all relevant pods, represented as a percentage of the requested value of the resource for the pods. When using AverageValue , the target value is the average of the metrics across all relevant pods. 4 Optional: Specifies an individual container to scale, based on the memory utilization of only that container, rather than the entire pod. In this example, only the container named api is to be scaled. 3.4.4. Understanding the Kafka trigger You can scale pods based on an Apache Kafka topic or other services that support the Kafka protocol. The custom metrics autoscaler does not scale higher than the number of Kafka partitions, unless you set the allowIdleConsumers parameter to true in the scaled object or scaled job. Note If the number of consumer groups exceeds the number of partitions in a topic, the extra consumer groups remain idle. 
To avoid this, by default the number of replicas does not exceed: The number of partitions on a topic, if a topic is specified The number of partitions of all topics in the consumer group, if no topic is specified The maxReplicaCount specified in the scaled object or scaled job CR You can use the allowIdleConsumers parameter to disable these default behaviors. Example scaled object with a Kafka target apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: # ... triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13 1 Specifies Kafka as the trigger type. 2 Specifies the name of the Kafka topic on which Kafka is processing the offset lag. 3 Specifies a comma-separated list of Kafka brokers to connect to. 4 Specifies the name of the Kafka consumer group used for checking the offset on the topic and processing the related lag. 5 Optional: Specifies the average target value that triggers scaling. Must be specified as a quoted string value. The default is 5 . 6 Optional: Specifies the target value for the activation phase. Must be specified as a quoted string value. 7 Optional: Specifies the Kafka offset reset policy for the Kafka consumer. The available values are: latest and earliest . The default is latest . 8 Optional: Specifies whether the number of Kafka replicas can exceed the number of partitions on a topic. If true , the number of Kafka replicas can exceed the number of partitions on a topic. This allows for idle Kafka consumers. If false , the number of Kafka replicas cannot exceed the number of partitions on a topic. This is the default. 9 Specifies how the trigger behaves when a Kafka partition does not have a valid offset. If true , the consumers are scaled to zero for that partition. If false , the scaler keeps a single consumer for that partition. This is the default. 10 Optional: Specifies whether the trigger includes or excludes partition lag for partitions whose current offset is the same as the current offset of the previous polling cycle. If true , the scaler excludes partition lag in these partitions. If false , the trigger includes all consumer lag in all partitions. This is the default. 11 Optional: Specifies the version of your Kafka brokers. Must be specified as a quoted string value. The default is 1.0.0 . 12 Optional: Specifies a comma-separated list of partition IDs to scope the scaling on. If set, only the listed IDs are considered when calculating lag. Must be specified as a quoted string value. The default is to consider all partitions. 13 Optional: Specifies whether to use TLS client authentication for Kafka. The default is disable . For information on configuring TLS, see "Understanding custom metrics autoscaler trigger authentications". 3.4.5. Understanding the Cron trigger You can scale pods based on a time range. When the time range starts, the custom metrics autoscaler scales the pods associated with an object from the configured minimum number of pods to the specified number of desired pods. At the end of the time range, the pods are scaled back to the configured minimum. The time period must be configured in cron format . 
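In cron format, the five space-separated fields represent the minute, hour, day of the month, month, and day of the week. For example, the expression 30 18 * * * matches 6:30 PM every day, which is the end value used in the example that follows.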
The following example scales the pods associated with this scaled object from 0 to 100 from 6:00 AM to 6:30 PM India Standard Time. Example scaled object with a Cron trigger apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: "0 6 * * *" 5 end: "30 18 * * *" 6 desiredReplicas: "100" 7 1 Specifies the minimum number of pods to scale down to at the end of the time frame. 2 Specifies the maximum number of replicas when scaling up. This value should be the same as desiredReplicas . The default is 100 . 3 Specifies a Cron trigger. 4 Specifies the timezone for the time frame. This value must be from the IANA Time Zone Database . 5 Specifies the start of the time frame. 6 Specifies the end of the time frame. 7 Specifies the number of pods to scale to between the start and end of the time frame. This value should be the same as maxReplicaCount . 3.5. Understanding custom metrics autoscaler trigger authentications A trigger authentication allows you to include authentication information in a scaled object or a scaled job that can be used by the associated containers. You can use trigger authentications to pass OpenShift Container Platform secrets, platform-native pod authentication mechanisms, environment variables, and so on. You define a TriggerAuthentication object in the same namespace as the object that you want to scale. That trigger authentication can be used only by objects in that namespace. Alternatively, to share credentials between objects in multiple namespaces, you can create a ClusterTriggerAuthentication object that can be used across all namespaces. Trigger authentications and cluster trigger authentication use the same configuration. However, a cluster trigger authentication requires an additional kind parameter in the authentication reference of the scaled object. Example secret for Basic authentication apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: "dXNlcm5hbWU=" 1 password: "cGFzc3dvcmQ=" 1 User name and password to supply to the trigger authentication. The values in a data stanza must be base-64 encoded. Example trigger authentication using a secret for Basic authentication kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example cluster trigger authentication with a secret for Basic authentication kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password 1 Note that no namespace is used with a cluster trigger authentication. 
2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the authentication parameter to supply by using the secret. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. Example secret with certificate authority (CA) details apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t... 1 Specifies the TLS CA Certificate for authentication of the metrics endpoint. The value must be base-64 encoded. 2 Specifies the TLS certificates and key for TLS client authentication. The values must be base-64 encoded. Example trigger authentication using a secret for CA details kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the secret to use with the specified parameter. 6 Specifies the authentication parameter for a custom CA when connecting to the metrics endpoint. 7 Specifies the name of the secret to use. 8 Specifies the key in the secret to use with the specified parameter. Example secret with a bearer token apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV" 1 1 Specifies a bearer token to use with bearer authentication. The value in a data stanza must be base-64 encoded. Example trigger authentication with a bearer token kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a secret for authorization when connecting to the metrics endpoint. 3 Specifies the type of authentication to use. 4 Specifies the name of the secret to use. 5 Specifies the key in the token to use with the specified parameter. Example trigger authentication with an environment variable kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses environment variables for authorization when connecting to the metrics endpoint. 3 Specify the parameter to set with this variable. 4 Specify the name of the environment variable. 5 Optional: Specify a container that requires authentication. The container must be in the same resource as referenced by scaleTargetRef in the scaled object. 
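The secret-based examples above require base64-encoded values in the data stanza. Rather than encoding the values by hand, you can create the secret from literal values and let the CLI encode them. The following command is a sketch that reuses the my-basic-secret example name; the user name and password values are placeholders: USD oc create secret generic my-basic-secret -n my-namespace --from-literal=username=<user_name> --from-literal=password=<password>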
Example trigger authentication with pod authentication providers kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3 1 Specifies the namespace of the object you want to scale. 2 Specifies that this trigger authentication uses a platform-native pod authentication when connecting to the metrics endpoint. 3 Specifies a pod identity. Supported values are none , azure , gcp , aws-eks , or aws-kiam . The default is none . Additional resources For information about OpenShift Container Platform secrets, see Providing sensitive data to pods . 3.5.1. Using trigger authentications You use trigger authentications and cluster trigger authentications by using a custom resource to create the authentication, then add a reference to a scaled object or scaled job. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you are using a secret, the Secret object must exist, for example: Example secret apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD> Procedure Create the TriggerAuthentication or ClusterTriggerAuthentication object. Create a YAML file that defines the object: Example trigger authentication with a secret kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: user-name - parameter: password name: my-secret key: password Create the TriggerAuthentication object: USD oc create -f <filename>.yaml Create or edit a ScaledObject YAML file that uses the trigger authentication: Create a YAML file that defines the object: Example scaled object with a trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify TriggerAuthentication . TriggerAuthentication is the default. Example scaled object with a cluster trigger authentication apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "basic" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2 1 Specify the name of your trigger authentication object. 2 Specify ClusterTriggerAuthentication . Create the scaled object by running the following command: USD oc apply -f <filename> 3.6. Pausing the custom metrics autoscaler for a scaled object You can pause and restart the autoscaling of a workload, as needed. 
For example, you might want to pause autoscaling before performing cluster maintenance or to avoid resource starvation by removing non-mission-critical workloads. 3.6.1. Pausing a custom metrics autoscaler You can pause the autoscaling of a scaled object by adding the autoscaling.keda.sh/paused-replicas annotation to the custom metrics autoscaler for that scaled object. The custom metrics autoscaler scales the replicas for that workload to the specified value and pauses autoscaling until the annotation is removed. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Add the autoscaling.keda.sh/paused-replicas annotation with any value: apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling. 3.6.2. Restarting the custom metrics autoscaler for a scaled object You can restart a paused custom metrics autoscaler by removing the autoscaling.keda.sh/paused-replicas annotation for that ScaledObject . apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" # ... Procedure Use the following command to edit the ScaledObject CR for your workload: USD oc edit ScaledObject scaledobject Remove the autoscaling.keda.sh/paused-replicas annotation. apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "4" 1 creationTimestamp: "2023-02-08T14:41:01Z" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0 1 Remove this annotation to restart a paused custom metrics autoscaler. 3.7. Gathering audit logs You can gather audit logs, which are a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. For example, audit logs can help you understand where an autoscaling request is coming from. This is key information when backends are getting overloaded by autoscaling requests made by user applications and you need to determine which is the troublesome application. 3.7.1. Configuring audit logging You can configure auditing for the Custom Metrics Autoscaler Operator by editing the KedaController custom resource. The logs are sent to an audit log file on a volume that is secured by using a persistent volume claim in the KedaController CR. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure Edit the KedaController custom resource to add the auditConfig stanza: kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: # ... metricsServer: # ... auditConfig: logFormat: "json" 1 logOutputVolumeClaim: "pvc-audit-log" 2 policy: rules: 3 - level: Metadata omitStages: "RequestReceived" 4 omitManagedFields: false 5 lifetime: 6 maxAge: "2" maxBackup: "1" maxSize: "50" 1 Specifies the output format of the audit log, either legacy or json . 2 Specifies an existing persistent volume claim for storing the log data. 
All requests coming to the API server are logged to this persistent volume claim. If you leave this field empty, the log data is sent to stdout. 3 Specifies which events should be recorded and what data they should include: None : Do not log events. Metadata : Log only the metadata for the request, such as user, timestamp, and so forth. Do not log the request text and the response text. This is the default. Request : Log only the metadata and the request text but not the response text. This option does not apply for non-resource requests. RequestResponse : Log event metadata, request text, and response text. This option does not apply for non-resource requests. 4 Specifies stages for which no event is created. 5 Specifies whether to omit the managed fields of the request and response bodies from being written to the API audit log, either true to omit the fields or false to include the fields. 6 Specifies the size and lifespan of the audit logs. maxAge : The maximum number of days to retain audit log files, based on the timestamp encoded in their filename. maxBackup : The maximum number of audit log files to retain. Set to 0 to retain all audit log files. maxSize : The maximum size in megabytes of an audit log file before it gets rotated. Verification View the audit log file directly: Obtain the name of the keda-metrics-apiserver-* pod: oc get pod -n openshift-keda Example output NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s View the log data by using a command similar to the following: USD oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: USD oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata Example output ... {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"4c81d41b-3dab-4675-90ce-20b87ce24013","stage":"ResponseComplete","requestURI":"/healthz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.131.0.1"],"userAgent":"kube-probe/1.28","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-02-16T13:00:03.554567Z","stageTimestamp":"2023-02-16T13:00:03.555032Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} ... Alternatively, you can view a specific log: Use a command similar to the following to log into the keda-metrics-apiserver-* pod: USD oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda For example: USD oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda Change to the /var/audit-policy/ directory: sh-4.4USD cd /var/audit-policy/ List the available logs: sh-4.4USD ls Example output log-2023.02.17-14:50 policy.yaml View the log, as needed: sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1 1 Optional: You can use the grep command to specify the log level to display: Metadata , Request , RequestResponse . For example: sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request Example output 3.8. Gathering debugging data When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. To help troubleshoot your issue, provide the following information: Data gathered using the must-gather tool. The unique cluster ID. 
You can use the must-gather tool to collect data about the Custom Metrics Autoscaler Operator and its components, including the following items: The openshift-keda namespace and its child objects. The Custom Metric Autoscaler Operator installation objects. The Custom Metric Autoscaler Operator CRD objects. 3.8.1. Gathering debugging data The following command runs the must-gather tool for the Custom Metrics Autoscaler Operator: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Note The standard OpenShift Container Platform must-gather command, oc adm must-gather , does not collect Custom Metrics Autoscaler Operator data. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is using a restricted network, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters on restricted networks, you must import the default must-gather image as an image stream by running the following command. USD oc import-image is/must-gather -n openshift Perform one of the following: To get only the Custom Metrics Autoscaler Operator must-gather data, use the following command: USD oc adm must-gather --image="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" The custom image for the must-gather command is pulled directly from the Operator package manifests, so that it works on any cluster where the Custom Metric Autoscaler Operator is available. To gather the default must-gather data in addition to the Custom Metric Autoscaler Operator information: Use the following command to obtain the Custom Metrics Autoscaler Operator image and set it as an environment variable: USD IMAGE="USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator \ -n openshift-marketplace \ -o jsonpath='{.status.channels[?(@.name=="stable")].currentCSVDesc.annotations.containerImage}')" Use the oc adm must-gather with the Custom Metrics Autoscaler Operator image: USD oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE} Example 3.1. 
Example must-gather output for the Custom Metric Autoscaler └── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── .insecure.log │ │ └── .log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Red Hat Customer Portal . 3.9. Viewing Operator metrics The Custom Metrics Autoscaler Operator exposes ready-to-use metrics that it pulls from the on-cluster monitoring component. You can query the metrics by using the Prometheus Query Language (PromQL) to analyze and diagnose issues. All metrics are reset when the controller pod restarts. 3.9.1. Accessing performance metrics You can access the metrics and run queries by using the OpenShift Container Platform web console. Procedure Select the Administrator perspective in the OpenShift Container Platform web console. Select Observe Metrics . To create a custom query, add your PromQL query to the Expression field. To add multiple queries, select Add Query . 3.9.1.1. Provided Operator metrics The Custom Metrics Autoscaler Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console. Table 3.1. Custom Metric Autoscaler Operator metrics Metric name Description keda_scaler_activity Whether the particular scaler is active or inactive. A value of 1 indicates the scaler is active; a value of 0 indicates the scaler is inactive. 
keda_scaler_metrics_value The current value for each scaler's metric, which is used by the Horizontal Pod Autoscaler (HPA) in computing the target average. keda_scaler_metrics_latency The latency of retrieving the current metric from each scaler. keda_scaler_errors The number of errors that have occurred for each scaler. keda_scaler_errors_total The total number of errors encountered for all scalers. keda_scaled_object_errors The number of errors that have occurred for each scaled object. keda_resource_totals The total number of Custom Metrics Autoscaler custom resources in each namespace for each custom resource type. keda_trigger_totals The total number of triggers by trigger type. Custom Metrics Autoscaler Admission webhook metrics The Custom Metrics Autoscaler Admission webhook also exposes the following Prometheus metrics. Metric name Description keda_scaled_object_validation_total The number of scaled object validations. keda_scaled_object_validation_errors The number of validation errors. 3.10. Understanding how to add custom metrics autoscalers To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job. You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload. 3.10.1. Adding a custom metrics autoscaler to a workload You can create a custom metrics autoscaler for a workload that is created by a Deployment , StatefulSet , or custom resource object. Prerequisites The Custom Metrics Autoscaler Operator must be installed. If you use a custom metrics autoscaler for scaling based on CPU or memory: Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage. USD oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Example output Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none> The pods associated with the object you want to scale must include specified memory and CPU limits. For example: Example pod spec apiVersion: v1 kind: Pod # ... spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: "128Mi" cpu: "500m" # ... Procedure Create a YAML file similar to the following. 
Only the name <2> , object name <4> , and object kind <5> are required: Example scaled object apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: "0" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: "json" logOutputVolumeClaim: "persistentVolumeClaimName" policy: rules: - level: Metadata omitStages: "RequestReceived" omitManagedFields: false lifetime: maxAge: "2" maxBackup: "1" maxSize: "50" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication 1 Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the "Pausing the custom metrics autoscaler for a workload" section. 2 Specifies a name for this custom metrics autoscaler. 3 Optional: Specifies the API version of the target resource. The default is apps/v1 . 4 Specifies the name of the object that you want to scale. 5 Specifies the kind as Deployment , StatefulSet or CustomResource . 6 Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 7 Optional. Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0 . The default is 300 . 8 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 9 Optional: Specifies the minimum number of replicas when scaling down. 10 Optional: Specifies the parameters for audit logs. as described in the "Configuring audit logging" section. 11 Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation . 12 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 13 Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false , which keeps the replica count as it is when the scaled object is deleted. 14 Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name} . 15 Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the "Scaling policies" section. 16 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. This example uses OpenShift Container Platform monitoring. 
17 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledobject <scaled_object_name> Example output NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. FALLBACK : Indicates whether the custom metrics autoscaler is able to get metrics from the source: If False , the custom metrics autoscaler is getting metrics. If True , the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created. 3.10.2. Adding a custom metrics autoscaler to a job You can create a custom metrics autoscaler for any Job object. Important Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites The Custom Metrics Autoscaler Operator must be installed. 
Procedure Create a YAML file similar to the following: kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: "custom" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: "0.5" pendingPodConditions: - "Ready" - "PodScheduled" - "AnyOtherCustomPodCondition" multipleScalersCalculation : "max" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job="test-app"}[1m])) authModes: "bearer" authenticationRef: 14 name: prom-cluster-triggerauthentication 1 Specifies the maximum duration the job can run. 2 Specifies the number of retries for a job. The default is 6 . 3 Optional: Specifies how many pod replicas a job should run in parallel; defaults to 1 . For non-parallel jobs, leave unset. When unset, the default is 1 . 4 Optional: Specifies how many successful pod completions are needed to mark a job completed. For non-parallel jobs, leave unset. When unset, the default is 1 . For parallel jobs with a fixed completion count, specify the number of completions. For parallel jobs with a work queue, leave unset. When unset the default is the value of the parallelism parameter. 5 Specifies the template for the pod the controller creates. 6 Optional: Specifies the maximum number of replicas when scaling up. The default is 100 . 7 Optional: Specifies the interval in seconds to check each trigger on. The default is 30 . 8 Optional: Specifies the number of successful finished jobs should be kept. The default is 100 . 9 Optional: Specifies how many failed jobs should be kept. The default is 100 . 10 Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0] . 11 Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated: default : The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs. gradual : The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs. 12 Optional: Specifies a scaling strategy: default , custom , or accurate . The default is default . For more information, see the link in the "Additional resources" section that follows. 13 Specifies the trigger to use as the basis for scaling, as described in the "Understanding the custom metrics autoscaler triggers" section. 14 Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see Understanding the custom metrics autoscaler trigger authentication in the Additional resources section. Enter TriggerAuthentication to use a trigger authentication. This is the default. Enter ClusterTriggerAuthentication to use a cluster trigger authentication. 
Create the custom metrics autoscaler by running the following command: USD oc create -f <filename>.yaml Verification View the command output to verify that the custom metrics autoscaler was created: USD oc get scaledjob <scaled_job_name> Example output NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s Note the following fields in the output: TRIGGERS : Indicates the trigger, or scaler, that is being used. AUTHENTICATION : Indicates the name of any trigger authentication being used. READY : Indicates whether the scaled object is ready to start scaling: If True , the scaled object is ready. If False , the scaled object is not ready because of a problem in one or more of the objects you created. ACTIVE : Indicates whether scaling is taking place: If True , scaling is taking place. If False , scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created. 3.10.3. Additional resources Understanding custom metrics autoscaler trigger authentications 3.11. Removing the Custom Metrics Autoscaler Operator You can remove the custom metrics autoscaler from your OpenShift Container Platform cluster. After removing the Custom Metrics Autoscaler Operator, remove other components associated with the Operator to avoid potential issues. Note Delete the KedaController custom resource (CR) first. If you do not delete the KedaController CR, OpenShift Container Platform can hang when you delete the openshift-keda project. If you delete the Custom Metrics Autoscaler Operator before deleting the CR, you are not able to delete the CR. 3.11.1. Uninstalling the Custom Metrics Autoscaler Operator Use the following procedure to remove the custom metrics autoscaler from your OpenShift Container Platform cluster. Prerequisites The Custom Metrics Autoscaler Operator must be installed. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Switch to the openshift-keda project. Remove the KedaController custom resource. Find the CustomMetricsAutoscaler Operator and click the KedaController tab. Find the custom resource, and then click Delete KedaController . Click Uninstall . Remove the Custom Metrics Autoscaler Operator: Click Operators Installed Operators . Find the CustomMetricsAutoscaler Operator and click the Options menu and select Uninstall Operator . Click Uninstall . Optional: Use the OpenShift CLI to remove the custom metrics autoscaler components: Delete the custom metrics autoscaler CRDs: clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh USD oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh Deleting the CRDs removes the associated roles, cluster roles, and role bindings. However, there might be a few cluster roles that must be manually deleted. List any custom metrics autoscaler cluster roles: USD oc get clusterrole | grep keda.sh Delete the listed custom metrics autoscaler cluster roles. For example: USD oc delete clusterrole.keda.sh-v1alpha1-admin List any custom metrics autoscaler cluster role bindings: USD oc get clusterrolebinding | grep keda.sh Delete the listed custom metrics autoscaler cluster role bindings. 
For example: USD oc delete clusterrolebinding.keda.sh-v1alpha1-admin Delete the custom metrics autoscaler project: USD oc delete project openshift-keda Delete the Custom Metrics Autoscaler Operator: USD oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda
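After these removal steps, you can optionally confirm that no custom metrics autoscaler resources remain. A quick check, mirroring the grep pattern used above; exactly which resources are present depends on the optional components you installed:

oc get crd | grep keda.sh
oc get clusterrole,clusterrolebinding | grep keda.sh
oc get project openshift-keda

If the uninstall completed, the first two commands return no output and the last command reports that the project is not found.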
[ "oc delete crd scaledobjects.keda.k8s.io", "oc delete crd triggerauthentications.keda.k8s.io", "oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem", "oc get all -n openshift-keda", "NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10", "oc project <project_name> 1", "oc create serviceaccount thanos 1", "apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token", "oc create -f <file_name>.yaml", "oc describe serviceaccount thanos 1", "Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>", "apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5", "oc create -f <file-name>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 
4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7", "apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3", "apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD", "oc create -f <filename>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2", 
"apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2", "oc apply -f <filename>", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"", "get pod -n openshift-keda", "NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s", "oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1", "oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda", "oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda", "sh-4.4USD cd /var/audit-policy/", "sh-4.4USD ls", "log-2023.02.17-14:50 policy.yaml", "sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1", "sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request", 
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}", "└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml", "tar cvaf must-gather.tar.gz 
must-gather.local.5421342344627712289/ 1", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication", "oc create -f <filename>.yaml", "oc get scaledobject <scaled_object_name>", "NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s", "kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication", "oc create -f <filename>.yaml", "oc get scaledjob <scaled_job_name>", "NAME MAX 
TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s", "oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh", "oc get clusterrole | grep keda.sh", "oc delete clusterrole.keda.sh-v1alpha1-admin", "oc get clusterrolebinding | grep keda.sh", "oc delete clusterrolebinding.keda.sh-v1alpha1-admin", "oc delete project openshift-keda", "oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/nodes/automatically-scaling-pods-with-the-custom-metrics-autoscaler-operator
Using Service Interconnect
Using Service Interconnect Red Hat Service Interconnect 1.8 Creating a service network with the CLI and YAML
null
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/index
Installing on-premise with Assisted Installer
Installing on-premise with Assisted Installer OpenShift Container Platform 4.15 Installing OpenShift Container Platform on-premise with the Assisted Installer Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on-premise_with_assisted_installer/index
Interactively installing RHEL over the network
Interactively installing RHEL over the network Red Hat Enterprise Linux 8 Installing RHEL on several systems using network resources or on a headless system with the graphical installer Red Hat Customer Content Services
[ "yum install nfs-utils", "/ exported_directory / clients", "/rhel8-install *", "systemctl start nfs-server.service", "systemctl reload nfs-server.service", "mkdir /mnt/rhel8-install/", "mount -o loop,ro -t iso9660 /image_directory/image.iso /mnt/rhel8-install/", "cp -r /mnt/rhel8-install/ /var/www/html/", "systemctl start httpd.service", "systemctl enable firewalld", "systemctl start firewalld", "firewall-cmd --add-port min_port - max_port /tcp --permanent firewall-cmd --add-service ftp --permanent", "firewall-cmd --reload", "mkdir /mnt/rhel8-install", "mount -o loop,ro -t iso9660 /image-directory/image.iso /mnt/rhel8-install", "mkdir /var/ftp/rhel8-install cp -r /mnt/rhel8-install/ /var/ftp/", "restorecon -r /var/ftp/rhel8-install find /var/ftp/rhel8-install -type f -exec chmod 444 {} \\; find /var/ftp/rhel8-install -type d -exec chmod 755 {} \\;", "systemctl start vsftpd.service", "systemctl restart vsftpd.service", "systemctl enable vsftpd", "install dhcp-server", "option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }", "systemctl enable --now dhcpd", "install dhcp-server", "option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }", "systemctl enable --now dhcpd6", "IPv6_rpfilter=no", "yum install httpd", "mkdir -p /var/www/html/redhat/", "mkdir -p /var/www/html/redhat/iso/", "mount -o loop,ro -t iso9660 path-to-RHEL-DVD.iso /var/www/html/redhat/iso", "cp -r /var/www/html/redhat/iso/images /var/www/html/redhat/ cp -r /var/www/html/redhat/iso/EFI /var/www/html/redhat/", "chmod 644 /var/www/html/redhat/EFI/BOOT/grub.cfg", "set default=\"1\" function load_video { insmod efi_gop insmod efi_uga insmod video_bochs insmod video_cirrus insmod all_video } load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod ext2 set timeout=60 # END /etc/grub.d/00_header # search --no-floppy --set=root -l ' RHEL-9-3-0-BaseOS-x86_64 ' # BEGIN /etc/grub.d/10_linux # menuentry 'Install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Test this media & install Red Hat Enterprise Linux 9.3 ' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 
/redhat/iso quiet initrdefi ../../images/pxeboot/initrd.img } submenu 'Troubleshooting -->' { menuentry 'Install Red Hat Enterprise Linux 9.3 in text mode' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.text quiet initrdefi ../../images/pxeboot/initrd.img } menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os { linuxefi ../../images/pxeboot/vmlinuz inst.repo=http:// 192.168.124.2 /redhat/iso inst.rescue quiet initrdefi ../../images/pxeboot/initrd.img } }", "chmod 755 /var/www/html/redhat/EFI/BOOT/BOOTX64.EFI", "firewall-cmd --zone public --add-port={80/tcp,67/udp,68/udp,546/udp,547/udp}", "firewall-cmd --reload", "systemctl enable --now httpd", "chmod -cR u=rwX,g=rX,o=rX /var/www/html", "restorecon -FvvR /var/www/html", "install dhcp-server", "option architecture-type code 93 = unsigned integer 16; subnet 192.168.124.0 netmask 255.255.255.0 { option routers 192.168.124.1 ; option domain-name-servers 192.168.124.1 ; range 192.168.124.100 192.168.124.200 ; class \"pxeclients\" { match if substring (option vendor-class-identifier, 0, 9) = \"PXEClient\"; next-server 192.168.124.2 ; if option architecture-type = 00:07 { filename \"redhat/EFI/BOOT/BOOTX64.EFI\"; } else { filename \"pxelinux/pxelinux.0\"; } } class \"httpclients\" { match if substring (option vendor-class-identifier, 0, 10) = \"HTTPClient\"; option vendor-class-identifier \"HTTPClient\"; filename \"http:// 192.168.124.2 /redhat/EFI/BOOT/BOOTX64.EFI\"; } }", "systemctl enable --now dhcpd", "install dhcp-server", "option dhcp6.bootfile-url code 59 = string; option dhcp6.vendor-class code 16 = {integer 32, integer 16, string}; subnet6 fd33:eb1b:9b36::/64 { range6 fd33:eb1b:9b36::64 fd33:eb1b:9b36::c8 ; class \"PXEClient\" { match substring (option dhcp6.vendor-class, 6, 9); } subclass \"PXEClient\" \"PXEClient\" { option dhcp6.bootfile-url \"tftp:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; } class \"HTTPClient\" { match substring (option dhcp6.vendor-class, 6, 10); } subclass \"HTTPClient\" \"HTTPClient\" { option dhcp6.bootfile-url \"http:// [fd33:eb1b:9b36::2] /redhat/EFI/BOOT/BOOTX64.EFI\"; option dhcp6.vendor-class 0 10 \"HTTPClient\"; } }", "systemctl enable --now dhcpd6", "IPv6_rpfilter=no", "yum install tftp-server", "firewall-cmd --add-service=tftp", "mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro", "cp -pr /mount_point/BaseOS/Packages/syslinux-tftpboot-version-architecture.rpm /my_local_directory", "umount /mount_point", "rpm2cpio syslinux-tftpboot-version-architecture.rpm | cpio -dimv", "mkdir /var/lib/tftpboot/pxelinux", "cp /my_local_directory/tftpboot/* /var/lib/tftpboot/pxelinux", "mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg", "default vesamenu.c32 prompt 1 timeout 600 display boot.msg label linux menu label ^Install system menu default kernel images/RHEL-8/vmlinuz append initrd=images/RHEL-8/initrd.img ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ label vesa menu label Install system with ^basic video driver kernel images/RHEL-8/vmlinuz append initrd=images/RHEL-8/initrd.img ip=dhcp inst.xdriver=vesa nomodeset inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ label rescue menu label ^Rescue installed system kernel images/RHEL-8/vmlinuz append initrd=images/RHEL-8/initrd.img inst.rescue inst.repo=http:///192.168.124.2/RHEL-8/x86_64/iso-contents-root/ label local menu label Boot from 
^local drive localboot 0xffff", "mkdir -p /var/lib/tftpboot/pxelinux/images/RHEL-8/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/images/RHEL-8/", "systemctl enable --now tftp.socket", "yum install tftp-server", "firewall-cmd --add-service=tftp", "mount -t iso9660 /path_to_image/name_of_image.iso /mount_point -o loop,ro", "mkdir /var/lib/tftpboot/redhat cp -r /mount_point/EFI /var/lib/tftpboot/redhat/ umount /mount_point", "chmod -R 755 /var/lib/tftpboot/redhat/", "set timeout=60 menuentry 'RHEL 8' { linux images/RHEL-8/vmlinuz ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ initrd images/RHEL-8/initrd.img }", "mkdir -p /var/lib/tftpboot/images/RHEL-8/ cp /path_to_x86_64_images/pxeboot/{vmlinuz,initrd.img}/var/lib/tftpboot/images/RHEL-8/", "systemctl enable --now tftp.socket", "yum install tftp-server dhcp-server", "firewall-cmd --add-service=tftp", "grub2-mknetdir --net-directory=/var/lib/tftpboot Netboot directory for powerpc-ieee1275 created. Configure your DHCP server to point to /boot/grub2/powerpc-ieee1275/core.elf", "yum install grub2-ppc64-modules", "set default=0 set timeout=5 echo -e \"\\nWelcome to the Red Hat Enterprise Linux 8 installer!\\n\\n\" menuentry 'Red Hat Enterprise Linux 8' { linux grub2-ppc64/vmlinuz ro ip=dhcp inst.repo=http:// 192.168.124.2 /RHEL-8/x86_64/iso-contents-root/ initrd grub2-ppc64/initrd.img }", "mount -t iso9660 /path_to_image/name_of_iso/ /mount_point -o loop,ro", "cp /mount_point/ppc/ppc64/{initrd.img,vmlinuz} /var/lib/tftpboot/grub2-ppc64/", "subnet 192.168.0.1 netmask 255.255.255.0 { allow bootp; option routers 192.168.0.5; group { #BOOTP POWER clients filename \"boot/grub2/powerpc-ieee1275/core.elf\"; host client1 { hardware ethernet 01:23:45:67:89:ab; fixed-address 192.168.0.112; } } }", "systemctl enable --now dhcpd", "systemctl enable --now tftp.socket", "mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer", "mokutil --reset", "ipmitool -I lanplus -H server_ip_address -U ipmi_user -P ipmi_password chassis power on", "ipmitool -I lanplus -H server_ip_address -U ipmi_user -P ipmi_password sol activate", "ipmitool -I lanplus -H server_ip_address -U user-name -P ipmi_password sol deactivate", "ipmitool -I lanplus -H server_ip_address -U user-name -P ipmi_password chassis power reset", "[USB: sdb1 / 2015-10-30-11-05-03-00] Rescue a Red Hat Enterprise Linux system (64-bit kernel) Test this media & install Red Hat Enterprise Linux 8 (64-bit kernel) * Install Red Hat Enterprise Linux 8 (64-bit kernel)", "inst.stage2=hd:UUID=your_UUID where your_UUID is the UUID that you recorded. 
Petitboot Option Editor qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq Device: ( ) sda2 [f8437496-78b8-4b11-9847-bb2d8b9f7cbd] (*) sdb1 [2015-10-30-11-05-03-00] ( ) Specify paths/URLs manually Kernel: /ppc/ppc64/vmlinuz Initrd: /ppc/ppc64/initrd.img Device tree: Boot arguments: ro inst.stage2=hd:UUID=2015-10-30-11-05-03-00 [ OK ] [ Help ] [ Cancel ]", "CD/DVD: sr0 Install Repair", "ipmitool lan set 1 ipsrc static", "ipmitool lan set 1 ipaddr _ip_address_", "ipmitool lan set 1 netmask _netmask_address_", "ipmitool lan set 1 defgw ipaddr _gateway_server_", "Where gateway_server is the gateway for this system.", "ipmitool raw 0x06 0x40.", "ssh root@<BMC server_ip_address> root@<BMC server password>", "root@witherspoon:~# obmcutil poweron", "ssh -p 2200 root@<BMC server_ip_address> root@", "http://<http_server_ip>/ppc/ppc64/vmlinuz", "http://<http_server_ip>/ppc/ppc64/initrd.img", "inst.repo=http://<http_server_ip>/<path> ifname=<ethernet_interface_name>:<mac_addr> ip=<os ip>::<gateway>:<2 digit mask>:<hostname>:<ethernet_interface_name>:none nameserver=<name_server>", "[USB: sdb1 / 2015-10-30-11-05-03-00] Rescue a Red Hat Enterprise Linux system (64-bit kernel) Test this media & install Red Hat Enterprise Linux 8 (64-bit kernel) * Install Red Hat Enterprise Linux 8 (64-bit kernel)", "inst.text inst.stage2=hd:UUID=your_UUID where your_UUID is the UUID that you recorded. Petitboot Option Editor qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq Device: ( ) sda2 [f8437496-78b8-4b11-9847-bb2d8b9f7cbd] (*) sdb1 [2015-10-30-11-05-03-00] ( ) Specify paths/URLs manually Kernel: /ppc/ppc64/vmlinuz Initrd: /ppc/ppc64/initrd.img Device tree: Boot arguments: ro inst.text inst.stage2=hd:UUID=2015-10-30-11-05-03-00 [ OK ] [ Help ] [ Cancel ]", "169.254.2.140 Subnet mask: 255.255.255.0 The default IP address of HMC1: 169.254.2.147", "ipmitool -I lanplus -H fsp_ip_address -P _ipmi_password_ power on", "ipmitool -I lanplus -H fsp_ip_address -P ipmi_password sol activate", "ipmitool -I lanplus -H fsp_ip_address -P ipmi_password sol deactivate", "ipmitool -I lanplus -H fsp_ip_address -P ipmi_password power off", "ipmitool -I lanplus -H fsp_ip_address -P ipmi_password power on", "ipmiutil power -u -N ipaddress -P ipmi_password", "ipmiutil sol -a -r -N ipaddress -P ipmi_password", "ipmiutil sol -d -N ipaddress -P ipmi_password", "ipmiutil power -d -N ipaddress -P ipmi_password", "ipmiutil power -u -N ipaddress -P ipmi_password", "Petitboot (v1.11) [Disk: sda2 / disk ] Red Hat Enterprise Linux ( system ) 8.x *[Encrypted Device: rhel device / device System information System configuration System status log Language Rescan devices Retrieve config from URL Plugins (0) Exit to shell", "rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno= <number>", "rd.dasd=0.0.0200 rd.dasd=0.0.0202(ro),0.0.0203(ro:failfast),0.0.0205-0.0.0207", "rd.zfcp=0.0.4000,0x5005076300C213e9,0x5022000000000000 rd.zfcp=0.0.4000", "ro ramdisk_size=40000 cio_ignore=all,!condev inst.repo=http://example.com/path/to/repository rd.znet=qeth,0.0.0600,0.0.0601,0.0.0602,layer2=1,portno=0,portname=foo ip=192.168.17.115::192.168.17.254:24:foobar.systemz.example.com:enc600:none nameserver=192.168.17.1 rd.dasd=0.0.0200 rd.dasd=0.0.0202 rd.zfcp=0.0.4000,0x5005076300c213e9,0x5022000000000000 rd.zfcp=0.0.5000,0x5005076300dab3e9,0x5022000000000000 inst.ks=http://example.com/path/to/kickstart", "images/kernel.img 0x00000000 images/initrd.img 0x02000000 images/genericdvd.prm 0x00010480 
images/initrd.addrsize 0x00010408", "qeth: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id , data_device_bus_id \" lcs or ctc: SUBCHANNELS=\" read_device_bus_id , write_device_bus_id \"", "SUBCHANNELS=\"0.0.f5f0,0.0.f5f1,0.0.f5f2\"", "DNS=\"10.1.2.3:10.3.2.1\"", "SEARCHDNS=\"subdomain.domain:domain\"", "DASD=\"eb1c,0.0.a000-0.0.a003,eb10-eb14(diag),0.0.ab1c(ro:diag)\"", "FCP_ n =\" device_bus_ID [ WWPN FCP_LUN ]\"", "FCP_1=\"0.0.fc00 0x50050763050b073d 0x4020400100000000\" FCP_2=\"0.0.4000\"", "inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/ inst.stage2=http://hostname/path_to_install_tree/", "ro ramdisk_size=40000 cio_ignore=all,!condev CMSDASD=\"191\" CMSCONFFILE=\"redhat.conf\" inst.vnc inst.repo=http://example.com/path/to/dvd-contents", "NETTYPE=\"qeth\" SUBCHANNELS=\"0.0.0600,0.0.0601,0.0.0602\" PORTNAME=\"FOOBAR\" PORTNO=\"0\" LAYER2=\"1\" MACADDR=\"02:00:be:3a:01:f3\" HOSTNAME=\"foobar.systemz.example.com\" IPADDR=\"192.168.17.115\" NETMASK=\"255.255.255.0\" GATEWAY=\"192.168.17.254\" DNS=\"192.168.17.1\" SEARCHDNS=\"systemz.example.com:example.com\" DASD=\"200-203\"", "logon user here", "cp ipl cms", "query disk", "cp query virtual storage", "cp query virtual osa", "cp query virtual dasd", "cp query virtual fcp", "virt-install --name=<guest_name> --disk size=<disksize_in_GB> --memory=<memory_size_in_MB> --cdrom <filepath_to_iso> --graphics vnc", "cp link tcpmaint 592 592 acc 592 fm", "ftp <host> (secure", "cd / location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit", "VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17", "redhat", "cp ipl DASD_device_number loadparm boot_entry_number", "cp ipl eb1c loadparm 0", "cp set loaddev portname WWPN lun LUN bootprog boot_entry_number", "cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0", "query loaddev", "cp ipl FCP_device", "cp ipl fc00", "cp set loaddev portname WWPN lun FCP_LUN bootprog 1", "cp set loaddev portname 20010060 eb1c0103 lun 00010000 00000000 bootprog 1", "cp query loaddev", "cp ipl FCP_device", "cp ipl fc00", ">vmlinuz initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-5-0-BaseOS-x86_64 rd.live.check quiet fips=1", "linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-4-0-BaseOS-x86_64 rd.live. check quiet fips=1", "modprobe.blacklist=ahci", "vncviewer -listen PORT", "TigerVNC Viewer 64-bit v1.8.0 Built on: 2017-10-12 09:20 Copyright (C) 1999-2017 TigerVNC Team and many others (see README.txt) See http://www.tigervnc.org for information about TigerVNC. 
Thu Jun 27 11:30:57 2019 main: Listening on port 5500", "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml", "subscription-manager register --activationkey= <activation_key_name> --org= <organization_ID>", "The system has been registered with id: 62edc0f8-855b-4184-b1b8-72a9dc793b96", "subscription-manager syspurpose role --set \"VALUE\"", "subscription-manager syspurpose role --set \"Red Hat Enterprise Linux Server\"", "subscription-manager syspurpose role --list", "subscription-manager syspurpose role --unset", "subscription-manager syspurpose service-level --set \"VALUE\"", "subscription-manager syspurpose service-level --set \"Standard\"", "subscription-manager syspurpose service-level --list", "subscription-manager syspurpose service-level --unset", "subscription-manager syspurpose usage --set \"VALUE\"", "subscription-manager syspurpose usage --set \"Production\"", "subscription-manager syspurpose usage --list", "subscription-manager syspurpose usage --unset", "subscription-manager syspurpose --show", "man subscription-manager", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Current System Purpose Status: Matched", "subscription-manager status +-------------------------------------------+ System Status Details +-------------------------------------------+ Overall Status: Disabled Content Access Mode is set to Simple Content Access. This host has access to content, regardless of subscription status. System Purpose Status: Disabled", "CP ATTACH EB1C TO *", "CP LINK RHEL7X 4B2E 4B2E MR DASD 4B2E LINKED R/W", "cio_ignore -r device_number", "cio_ignore -r 4b2e", "chccwdev -e device_number", "chccwdev -e 4b2e", "cd /root # dasdfmt -b 4096 -d cdl -p /dev/disk/by-path/ccw-0.0.4b2e Drive Geometry: 10017 Cylinders * 15 Heads = 150255 Tracks I am going to format the device /dev/disk/by-path/ccw-0.0.4b2e in the following way: Device number of device : 0x4b2e Labelling device : yes Disk label : VOL1 Disk identifier : 0X4B2E Extent start (trk no) : 0 Extent end (trk no) : 150254 Compatible Disk Layout : yes Blocksize : 4096 --->> ATTENTION! <<--- All data of that device will be lost. 
Type \"yes\" to continue, no will leave the disk untouched: yes cyl 97 of 3338 |#----------------------------------------------| 2%", "Rereading the partition table Exiting", "fdasd -a /dev/disk/by-path/ccw-0.0.4b2e reading volume label ..: VOL1 reading vtoc ..........: ok auto-creating one partition for the whole disk writing volume label writing VTOC rereading partition table", "machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf", "title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "title Red Hat Enterprise Linux (4.18.0-80.el8.s390x) 8.0 (Ootpa) version 4.18.0-80.el8.s390x linux /boot/vmlinuz-4.18.0-80.el8.s390x initrd /boot/initramfs-4.18.0-80.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-80.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-80.el8.s390x.conf' Target device information Device..........................: 5e:00 Partition.......................: 5e:01 Device name.....................: dasda Device driver name..............: dasd DASD device number..............: 0201 Type............................: disk partition Disk layout.....................: ECKD/compatible disk layout Geometry - heads................: 15 Geometry - sectors..............: 12 Geometry - cylinders............: 13356 Geometry - start................: 24 File system block size..........: 4096 Physical block size.............: 4096 Device size in physical blocks..: 262152 Building bootmap in '/boot' Building menu 'zipl-automatic-menu' Adding #1: IPL section '4.18.0-80.el8.s390x' (default) initial ramdisk...: /boot/initramfs-4.18.0-80.el8.s390x.img kernel image......: /boot/vmlinuz-4.18.0-80.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.dasd=0.0.0200 rd.dasd=0.0.0207 rd.dasd=0.0.202b rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' component address: kernel image....: 0x00010000-0x0049afff parmline........: 0x0049b000-0x0049bfff initial ramdisk.: 0x004a0000-0x01a26fff internal loader.: 0x0000a000-0x0000cfff Preparing boot menu Interactive prompt......: enabled Menu timeout............: 5 seconds Default configuration...: '4.18.0-80.el8.s390x' Preparing boot device: dasda (0201). 
Syncing disks Done.", "0.0.0207 0.0.0200 use_diag=1 readonly=1", "cio_ignore -r device_number", "cio_ignore -r 021a", "echo add > /sys/bus/ccw/devices/ dasd-bus-ID /uevent", "echo add > /sys/bus/ccw/devices/0.0.021a/uevent", "machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf", "title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "title Red Hat Enterprise Linux (4.18.0-32.el8.s390x) 8.0 (Ootpa) version 4.18.0-32.el8.s390x linux /boot/vmlinuz-4.18.0-32.el8.s390x initrd /boot/initramfs-4.18.0-32.el8.s390x.img options root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0 id rhel-20181027190514-4.18.0-32.el8.s390x grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "zipl -V Using config file '/etc/zipl.conf' Using BLS config file '/boot/loader/entries/4ab74e52867b4f998e73e06cf23fd761-4.18.0-32.el8.s390x.conf' Target device information Device..........................: 08:00 Partition.......................: 08:01 Device name.....................: sda Device driver name..............: sd Type............................: disk partition Disk layout.....................: SCSI disk layout Geometry - start................: 2048 File system block size..........: 4096 Physical block size.............: 512 Device size in physical blocks..: 10074112 Building bootmap in '/boot/' Building menu 'rh-automatic-menu' Adding #1: IPL section '4.18.0-32.el8.s390x' (default) kernel image......: /boot/vmlinuz-4.18.0-32.el8.s390x kernel parmline...: 'root=/dev/mapper/vg_devel1-lv_root crashkernel=auto rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a000000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a100000000 rd.zfcp=0.0.fc00,0x5105074308c212e9,0x401040a300000000 rd.lvm.lv=vg_devel1/lv_root rd.lvm.lv=vg_devel1/lv_swap cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0' initial ramdisk...: /boot/initramfs-4.18.0-32.el8.s390x.img component address: kernel image....: 0x00010000-0x007a21ff parmline........: 0x00001000-0x000011ff initial ramdisk.: 0x02000000-0x028f63ff internal loader.: 0x0000a000-0x0000a3ff Preparing boot device: sda. Detected SCSI PCBIOS disk layout. Writing SCSI master boot record. 
Syncing disks Done.", "0.0.fc00 0x5105074308c212e9 0x401040a000000000 0.0.fc00 0x5105074308c212e9 0x401040a100000000 0.0.fc00 0x5105074308c212e9 0x401040a300000000 0.0.fcd0 0x5105074308c2aee9 0x401040a000000000 0.0.fcd0 0x5105074308c2aee9 0x401040a100000000 0.0.fcd0 0x5105074308c2aee9 0x401040a300000000 0.0.4000 0.0.5000", "cio_ignore -r device_number", "cio_ignore -r fcfc", "echo add > /sys/bus/ccw/devices/device-bus-ID/uevent", "echo add > /sys/bus/ccw/devices/0.0.fcfc/uevent", "lsmod | grep qeth qeth_l3 69632 0 qeth_l2 49152 1 qeth 131072 2 qeth_l3,qeth_l2 qdio 65536 3 qeth,qeth_l3,qeth_l2 ccwgroup 20480 1 qeth", "modprobe qeth", "cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id", "cio_ignore -r 0.0.f500,0.0.f501,0.0.f502", "znetconf -u Scanning for network devices Device IDs Type Card Type CHPID Drv. ------------------------------------------------------------ 0.0.f500,0.0.f501,0.0.f502 1731/01 OSA (QDIO) 00 qeth 0.0.f503,0.0.f504,0.0.f505 1731/01 OSA (QDIO) 01 qeth 0.0.0400,0.0.0401,0.0.0402 1731/05 HiperSockets 02 qeth", "znetconf -a f500 Scanning for network devices Successfully configured device 0.0.f500 (encf500)", "znetconf -a f500 -o portname=myname Scanning for network devices Successfully configured device 0.0.f500 (encf500)", "echo read_device_bus_id,write_device_bus_id,data_device_bus_id > /sys/bus/ccwgroup/drivers/qeth/group", "echo 0.0.f500,0.0.f501,0.0.f502 > /sys/bus/ccwgroup/drivers/qeth/group", "ls /sys/bus/ccwgroup/drivers/qeth/0.0.f500", "echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online", "cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online 1", "cat /sys/bus/ccwgroup/drivers/qeth/0.0.f500/if_name encf500", "lsqeth encf500 Device name : encf500 ------------------------------------------------- card_type : OSD_1000 cdev0 : 0.0.f500 cdev1 : 0.0.f501 cdev2 : 0.0.f502 chpid : 76 online : 1 portname : OSAPORT portno : 0 state : UP (LAN ONLINE) priority_queueing : always queue 0 buffer_count : 16 layer2 : 1 isolation : none", "cd /etc/sysconfig/network-scripts # cp ifcfg-enc9a0 ifcfg-enc600", "lsqeth -p devices CHPID interface cardtype port chksum prio-q'ing rtr4 rtr6 lay'2 cnt -------------------------- ----- ---------------- -------------- ---- ------ ---------- ---- ---- ----- ----- 0.0.09a0/0.0.09a1/0.0.09a2 x00 enc9a0 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64 0.0.0600/0.0.0601/0.0.0602 x00 enc600 Virt.NIC QDIO 0 sw always_q_2 n/a n/a 1 64", "IBM QETH DEVICE=enc9a0 BOOTPROTO=static IPADDR=10.12.20.136 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.09a0,0.0.09a1,0.0.09a2 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:23:65:1a TYPE=Ethernet", "IBM QETH DEVICE=enc600 BOOTPROTO=static IPADDR=192.168.70.87 NETMASK=255.255.255.0 ONBOOT=yes NETTYPE=qeth SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602 PORTNAME=OSAPORT OPTIONS='layer2=1 portno=0' MACADDR=02:00:00:b3:84:ef TYPE=Ethernet", "cio_ignore -r read_device_bus_id,write_device_bus_id,data_device_bus_id", "cio_ignore -r 0.0.0600,0.0.0601,0.0.0602", "echo add > /sys/bus/ccw/devices/read-channel/uevent", "echo add > /sys/bus/ccw/devices/0.0.0600/uevent", "lsqeth", "ifup enc600", "ip addr show enc600 3: enc600: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 3c:97:0e:51:38:17 brd ff:ff:ff:ff:ff:ff inet 10.85.1.245/24 brd 10.34.3.255 scope global dynamic enc600 valid_lft 81487sec preferred_lft 81487sec inet6 1574:12:5:1185:3e97:eff:fe51:3817/64 scope global noprefixroute dynamic valid_lft 2591994sec preferred_lft 
604794sec inet6 fe45::a455:eff:d078:3847/64 scope link valid_lft forever preferred_lft forever", "ip route default via 10.85.1.245 dev enc600 proto static metric 1024 12.34.4.95/24 dev enp0s25 proto kernel scope link src 12.34.4.201 12.38.4.128 via 12.38.19.254 dev enp0s25 proto dhcp metric 1 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1", "ping -c 1 192.168.70.8 PING 192.168.70.8 (192.168.70.8) 56(84) bytes of data. 64 bytes from 192.168.70.8: icmp_seq=0 ttl=63 time=8.07 ms", "machine_id=USD(cat /etc/machine-id) kernel_version=USD(uname -r) ls /boot/loader/entries/USDmachine_id-USDkernel_version.conf", "root=10.16.105.196:/nfs/nfs_root cio_ignore=all,!condev rd.znet=qeth,0.0.0a00,0.0.0a01,0.0.0a02,layer2=1,portno=0,portname=OSAPORT ip=10.16.105.197:10.16.105.196:10.16.111.254:255.255.248.0:nfs‐server.subdomain.domain:enc9a0:none rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us", "subscription-manager unregister", "vmlinuz ... inst.debug", "cd /tmp/pre-anaconda-logs/", "dmesg", "[ 170.171135] sd 5:0:0:0: [sdb] Attached SCSI removable disk", "mkdir usb", "mount /dev/sdb1 /mnt/usb", "cd /mnt/usb", "ls", "cp /tmp/*log /mnt/usb", "umount /mnt/usb", "cd /tmp", "scp *log user@address:path", "scp *log [email protected]:/home/john/logs/", "The authenticity of host '192.168.0.122 (192.168.0.122)' can't be established. ECDSA key fingerprint is a4:60:76:eb:b2:d0:aa:23:af:3d:59:5c:de:bb:c4:42. Are you sure you want to continue connecting (yes/no)?", "curl --output directory-path/filename.iso 'new_copied_link_location' --continue-at -", "sha256sum rhel-x.x-x86_64-dvd.iso `85a...46c rhel-x.x-x86_64-dvd.iso`", "curl --output _rhel-x.x-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-x.x-x86_64-dvd.iso?_auth =141...963' --continue-at -", "grubby --default-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64", "grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-4.18.0-94.el8.x86_64", "df -h", "Filesystem Size Used Avail Use% Mounted on devtmpfs 396M 0 396M 0% /dev tmpfs 411M 0 411M 0% /dev/shm tmpfs 411M 6.7M 405M 2% /run tmpfs 411M 0 411M 0% /sys/fs/cgroup /dev/mapper/rhel-root 17G 4.1G 13G 25% / /dev/sda1 1014M 173M 842M 17% /boot tmpfs 83M 20K 83M 1% /run/user/42 tmpfs 83M 84K 83M 1% /run/user/1000 /dev/dm-4 90G 90G 0 100% /home", "free -m", "mem= xx M", "free -m", "grubby --update-kernel=ALL --args=\"mem= xx M\"", "Enable=true", "systemctl restart gdm.service", "X :1 -query address", "Xnest :1 -query address", "inst.rescue inst.dd=driver_name", "inst.rescue modprobe.blacklist=driver_name", "The rescue environment will now attempt to find your Linux installation and mount it under the directory: /mnt/sysroot/. You can then make any changes required to your system. Choose 1 to proceed with this step. You can choose to mount your file systems read-only instead of read-write by choosing 2 . If for some reason this process does not work choose 3 to skip directly to a shell. 
1) Continue 2) Read-only mount 3) Skip to shell 4) Quit (Reboot)", "sh-4.2#", "sh-4.2# chroot /mnt/sysroot", "sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory", "sh-4.2# fdisk -l", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# sosreport", "bash-4.2# ip addr add 10.13.153.64/23 dev eth0", "sh-4.2# exit", "sh-4.2# cp /mnt/sysroot/var/tmp/sosreport new_location", "sh-4.2# scp /mnt/sysroot/var/tmp/sosreport username@hostname:sosreport", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# /sbin/grub2-install install_device", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# yum install /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm", "sh-4.2# exit", "sh-4.2# chroot /mnt/sysroot/", "sh-4.2# yum remove xorg-x11-drv-wacom", "sh-4.2# exit", "ip=192.168.1.15 netmask=255.255.255.0 gateway=192.168.1.254 nameserver=192.168.1.250 hostname=myhost1", "ip=192.168.1.15::192.168.1.254:255.255.255.0:myhost1::none: nameserver=192.168.1.250", "inst.xtimeout= N", "[ ...] rootfs image is not initramfs", "sha256sum dvd/images/pxeboot/initrd.img fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 dvd/images/pxeboot/initrd.img", "grep sha256 dvd/.treeinfo images/efiboot.img = sha256: d357d5063b96226d643c41c9025529554a422acb43a4394e4ebcaa779cc7a917 images/install.img = sha256: 8c0323572f7fc04e34dd81c97d008a2ddfc2cfc525aef8c31459e21bf3397514 images/pxeboot/initrd.img = sha256: fdb1a70321c06e25a1ed6bf3d8779371b768d5972078eb72b2c78c925067b5d8 images/pxeboot/vmlinuz = sha256: b9510ea4212220e85351cbb7f2ebc2b1b0804a6d40ccb93307c165e16d1095db", "[ ...] No filesystem could mount root, tried: [ ...] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(1,0) [ ...] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-55.el9.s390x #1 [ ...] [ ...] Call Trace: [ ...] ([<...>] show_trace+0x.../0x...) [ ...] [<...>] show_stack+0x.../0x [ ...] [<...>] panic+0x.../0x [ ...] [<...>] mount_block_root+0x.../0x [ ...] [<...>] prepare_namespace+0x.../0x [ ...] [<...>] kernel_init_freeable+0x.../0x [ ...] [<...>] kernel_init+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x [ ...] [<...>] kernel_thread_starter+0x.../0x...", "inst.stage2=https://hostname/path_to_install_image/ inst.noverifyssl", "inst.repo=https://hostname/path_to_install_repository/ inst.noverifyssl", "inst.stage2.all inst.stage2=http://hostname1/path_to_install_tree/ inst.stage2=http://hostname2/path_to_install_tree/ inst.stage2=http://hostname3/path_to_install_tree/", "[PROTOCOL://][USERNAME[:PASSWORD]@]HOST[:PORT]", "inst.nosave=Input_ks,logs", "ifname=eth0:01:23:45:67:89:ab", "vlan=vlan5:enp0s1", "bond=bond0:enp0s1,enp0s2:mode=active-backup,tx_queues=32,downdelay=5000", "team=team0:enp0s1,enp0s2", "bridge=bridge0:enp0s1,enp0s2", "modprobe.blacklist=ahci,firewire_ohci", "modprobe.blacklist=virtio_blk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/interactively_installing_rhel_over_the_network/index
Creating and Consuming Execution Environments
Creating and Consuming Execution Environments Red Hat Ansible Automation Platform 2.3 Create consistent and reproducible automation execution environments for your Red Hat Ansible Automation Platform. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/creating_and_consuming_execution_environments/index
2. More to Come
2. More to Come The Red Hat Enterprise Linux Introduction to System Administration is part of Red Hat, Inc.'s growing commitment to provide useful and timely support to Red Hat Enterprise Linux users. As new releases of Red Hat Enterprise Linux are made available, we make every effort to include both new and improved documentation for you. 2.1. Send in Your Feedback If you spot a typo in the Red Hat Enterprise Linux Introduction to System Administration , or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla ) against the component rhel-isa . If you mention this manual's identifier, we will know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-intro-more-to-come
Chapter 13. Optimizing networking
Chapter 13. Optimizing networking The OpenShift SDN uses OpenvSwitch, virtual extensible LAN (VXLAN) tunnels, OpenFlow rules, and iptables. This network can be tuned by using jumbo frames, network interface controllers (NIC) offloads, multi-queue, and ethtool settings. OVN-Kubernetes uses Geneve (Generic Network Virtualization Encapsulation) instead of VXLAN as the tunnel protocol. VXLAN provides benefits over VLANs, such as an increase in networks from 4096 to over 16 million, and layer 2 connectivity across physical networks. This allows for all pods behind a service to communicate with each other, even if they are running on different systems. VXLAN encapsulates all tunneled traffic in user datagram protocol (UDP) packets. However, this leads to increased CPU utilization. Both these outer- and inner-packets are subject to normal checksumming rules to guarantee data is not corrupted during transit. Depending on CPU performance, this additional processing overhead can cause a reduction in throughput and increased latency when compared to traditional, non-overlay networks. Cloud, VM, and bare metal CPU performance can be capable of handling much more than one Gbps network throughput. When using higher bandwidth links such as 10 or 40 Gbps, reduced performance can occur. This is a known issue in VXLAN-based environments and is not specific to containers or OpenShift Container Platform. Any network that relies on VXLAN tunnels will perform similarly because of the VXLAN implementation. If you are looking to push beyond one Gbps, you can: Evaluate network plug-ins that implement different routing techniques, such as border gateway protocol (BGP). Use VXLAN-offload capable network adapters. VXLAN-offload moves the packet checksum calculation and associated CPU overhead off of the system CPU and onto dedicated hardware on the network adapter. This frees up CPU cycles for use by pods and applications, and allows users to utilize the full bandwidth of their network infrastructure. VXLAN-offload does not reduce latency. However, CPU utilization is reduced even in latency tests. 13.1. Optimizing the MTU for your network There are two important maximum transmission units (MTUs): the network interface controller (NIC) MTU and the cluster network MTU. The NIC MTU is only configured at the time of OpenShift Container Platform installation. The MTU must be less than or equal to the maximum supported value of the NIC of your network. If you are optimizing for throughput, choose the largest possible value. If you are optimizing for lowest latency, choose a lower value. The SDN overlay's MTU must be less than the NIC MTU by 50 bytes at a minimum. This accounts for the SDN overlay header. So, on a normal ethernet network, set this to 1450 . On a jumbo frame ethernet network, set this to 8950 . For OVN and Geneve, the MTU must be less than the NIC MTU by 100 bytes at a minimum. Note This 50 byte overlay header is relevant to the OpenShift SDN. Other SDN solutions might require the value to be more or less. 13.2. Recommended practices for installing large scale clusters When installing large clusters or scaling the cluster to larger node counts, set the cluster network cidr accordingly in your install-config.yaml file before you install the cluster: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 The default cluster network cidr 10.128.0.0/14 cannot be used if the cluster size is more than 500 nodes. 
It must be set to 10.128.0.0/12 or 10.128.0.0/10 to scale to node counts beyond 500 nodes. 13.3. Impact of IPsec Because encrypting and decrypting traffic between node hosts uses CPU power, enabling encryption affects both throughput and CPU usage on the nodes, regardless of the IP security system being used. IPsec encrypts traffic at the IP payload level, before it reaches the NIC, protecting fields that would otherwise be used for NIC offloading. As a result, some NIC acceleration features might not be usable when IPsec is enabled, which leads to decreased throughput and increased CPU usage. Additional resources Modifying advanced network configuration parameters Configuration parameters for the OVN-Kubernetes default CNI network provider Configuration parameters for the OpenShift SDN default CNI network provider
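To make the CIDR sizing concrete, the following install-config.yaml networking stanza is a minimal sketch for a cluster that must grow beyond 500 nodes; it reuses the values from the example above and only widens the cluster network CIDR to 10.128.0.0/12 (the specific CIDR is an illustrative assumption, choose it based on your planned node count):

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/12   # example value, widened from the default /14 to support more than 500 nodes
    hostPrefix: 23
  machineCIDR: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16

With hostPrefix: 23, each node is allocated a /23 subnet (510 usable pod addresses), so a /14 cluster network yields only 512 node subnets, which is why the default cannot accommodate more than 500 nodes, while a /12 yields 2048 subnets and a /10 yields 8192.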
[ "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineCIDR: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/optimizing-networking
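As a follow-up to the MTU and IPsec guidance in the networking section above, the following cluster Network operator manifest, typically supplied as a manifest file such as cluster-network-03-config.yml at installation time, is a hedged sketch of how the cluster network MTU and IPsec encryption might be set for an OVN-Kubernetes cluster on a jumbo frame network. The mtu value of 8900 follows the rule that the Geneve overlay MTU must be at least 100 bytes below a 9000-byte NIC MTU; the ipsecConfig field name is an assumption based on the OVN-Kubernetes provider configuration parameters referenced in the additional resources:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 8900        # NIC MTU 9000 minus the 100-byte Geneve overhead
      ipsecConfig: {}  # assumed field for enabling IPsec; expect higher CPU usage and reduced throughput

For an OpenShift SDN cluster on the same jumbo frame network, the equivalent sketch would use openshiftSDNConfig with mtu: 8950, reflecting the 50-byte VXLAN overlay header described above.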
Config APIs
Config APIs OpenShift Container Platform 4.12 Reference guide for config APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/config_apis/index
macro::json_output_data_start
macro::json_output_data_start Name macro::json_output_data_start - Start the JSON output. Synopsis Arguments None Description The json_output_data_start macro is designed to be called from the 'json_data' probe in the user's script. It marks the start of the JSON output.
[ "@json_output_data_start()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-json-output-data-start
Virtualization
Virtualization OpenShift Container Platform 4.16 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml", "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <path/virtctl-file-name>", "echo USDPATH", "export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig", "C:\\> path", "echo USDPATH", "subscription-manager repos --enable cnv-4.16-for-rhel-8-x86_64-rpms", "yum install kubevirt-virtctl", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates\", \"value\": \"HotplugVolumes\"}]'", "virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> --volume=<volume_name> --output=<output_file>", "virtctl guestfs -n <namespace> <pvc_name> 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.16.5 channel: \"stable\" 1", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.16.5 OpenShift Virtualization 4.16.5 Succeeded", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc delete namespace openshift-cnv", "oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)", "oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc edit <resource_type> <resource_name> -n {CNVNamespace}", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub 
namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.16.5 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.16.5 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" 1 effect: \"NoSchedule\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: 
<sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "oc adm new-project wasp", "oc create sa -n wasp wasp", "oc create clusterrolebinding wasp --clusterrole=cluster-admin --serviceaccount=wasp:wasp", "oc adm policy add-scc-to-user -n wasp privileged -z wasp", "kind: DaemonSet apiVersion: apps/v1 metadata: name: wasp-agent namespace: wasp labels: app: wasp tier: node spec: selector: matchLabels: name: wasp template: metadata: annotations: description: >- Configures swap for workloads labels: name: wasp spec: serviceAccountName: wasp hostPID: true hostUsers: true terminationGracePeriodSeconds: 5 containers: - name: wasp-agent image: >- registry.redhat.io/container-native-virtualization/wasp-agent-rhel9:v4.16 imagePullPolicy: Always env: - name: \"FSROOT\" value: \"/host\" resources: requests: cpu: 100m memory: 50M securityContext: privileged: true volumeMounts: - name: host mountPath: \"/host\" volumes: - name: host hostPath: path: \"/\" priorityClassName: system-node-critical updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 10% maxSurge: 0 status: {}", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' # MCP #machine.openshift.io/cluster-api-machine-role: worker # machine #node-role.kubernetes.io/worker: '' # node kubeletConfig: failSwapOn: false evictionSoft: memory.available: \"1Gi\" evictionSoftGracePeriod: memory.available: \"10s\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config spec kubeletConfig: evictionSoft: memory.available: 1Gi evictionSoftGracePeriod: memory.available: 1m30s failSwapOn: false", "oc wait mcp worker --for condition=Updated=True", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 90-worker-swap spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Provision and enable swap ConditionFirstBoot=no [Service] Type=oneshot Environment=SWAP_SIZE_MB=5000 ExecStart=/bin/sh -c \"sudo dd if=/dev/zero of=/var/tmp/swapfile count=USD{SWAP_SIZE_MB} bs=1M && sudo chmod 600 /var/tmp/swapfile && sudo mkswap /var/tmp/swapfile && sudo swapon /var/tmp/swapfile && free -h && sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\\\"/ 50ms\\\"\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: swap-provision.service", "NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1)", "NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1) = 16 GB * (1.5 - 1) = 16 GB * (0.5) = 8 GB", "apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: wasp-alerts namespace: 
openshift-monitoring spec: groups: - name: wasp.rules rules: - alert: NodeSwapping annotations: description: Node {{ USDlabels.instance }} is swapping at a rate of {{ printf \"%.2f\" USDvalue }} MB/s runbook_url: https://github.com/openshift-virtualization/wasp-agent/tree/main/runbooks/alerts/NodeSwapping.md summary: A node is swapping memory pages expr: | # In MB/s irate(node_memory_SwapFree_bytes{job=\"node-exporter\"}[5m]) / 1024^2 > 0 for: 1m labels: severity: critical", "oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' -p='[ { \"op\": \"replace\", \"path\": \"/spec/higherWorkloadDensity/memoryOvercommitPercentage\", \"value\": 150 } ]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc rollout status ds wasp-agent -n wasp", "daemon set \"wasp-agent\" successfully rolled out", "oc get nodes -l node-role.kubernetes.io/worker", "oc debug node/<selected-node> -- free -m", "oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{\"\\n\"}'", "{\"memoryOvercommitPercentage\":150}", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":[]}]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "[ { \"lastTransitionTime\": \"2022-12-09T16:29:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"ReconcileComplete\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Available\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Progressing\" }, { \"lastTransitionTime\": \"2022-12-09T16:39:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Degraded\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, 
\"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Upgradeable\" 1 } ]", "oc adm upgrade", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.versions\"", "[ { \"name\": \"operator\", \"version\": \"4.16.5\" } ]", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \"[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/spec/workloadUpdateStrategy/workloadUpdateMethods\\\", \\\"value\\\":{WorkloadUpdateMethodConfig}}]\"", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get vmim -A", "apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2", "virtctl create instancetype --cpu 2 --memory 256Mi", "virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: 
containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 5 registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header 
\"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --user=\"USD{USER_NAME}\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --serviceaccount=\"USD{SERVICE_ACCOUNT_NAME}\"", "virtctl console <vm_name>", "virtctl create vm --instancetype <my_instancetype> --preference <my_preference>", "virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>", "virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference", "oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 
3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: 
virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4.16 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107", "oc apply -f windows11-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: 
preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: \"true\" <second_example_key>: \"true\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration:", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/VMPersistentState\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: 
de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1", "oc describe node <node_name>", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.16.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 
3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: 
false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: \"true\" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>", "oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2", "lspci -nnk | grep <device_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: resourceRequirements: vmiCPUAllocationRatio: 1 1", "apiVersion: kubevirt.io/v1 kind: VM spec: domain: devices: networkInterfaceMultiqueue: true", "virtctl addvolume 
<virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: domain: devices: disks: - disk: bus: virtio name: rootdisk errorPolicy: report 1 disk1: disk_one 2 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 3 interfaces: - masquerade: {} name: default", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report 1 lun: 2 bus: scsi reservation: true 3 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/persistentReservation\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234", "oc create -f headless_service.yaml", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: \"myvm\" 2 subdomain: \"mysubdomain\" 3", "virtctl console vm-fedora", "ping myvm.mysubdomain.<namespace>.svc.cluster.local", "PING 
myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"bridge-network\", 3 \"type\": \"bridge\", 4 \"bridge\": \"br1\", 5 \"macspoofchk\": false, 6 \"vlan\": 100, 7 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 8 }", "oc create -f network-attachment-definition.yaml 1", "oc get network-attachment-definition bridge-network", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3", "oc apply -f example-vm.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12", "oc create -f <name>-sriov-network.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3", "oc apply -f <vm_sriov>.yaml 1", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk=\"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: \"\" numa: topologyPolicy: single-numa-node", "oc get 
performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{\"\\n\"}'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/defaultRuntimeClass\", \"value\":\"<runtimeclass-name>\"}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/featureGates/alignCPUs\", \"value\": true}]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: \"8086\" deviceID: \"1572\" pfNames: - eno3 rootDevices: - \"0000:19:00.2\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk-", "oc delete mcp worker-dpdk", "oc create ns dpdk-checkup-ns", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: \"off\" trust: \"on\" vlan: 1019", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east", "oc apply -f <file_name>.yaml", "grubby --update-kernel=ALL --args=\"default_hugepagesz=1GB hugepagesz=1G hugepages=8\"", "dnf install -y tuned-profiles-cpu-partitioning", "echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf", "tuned-adm profile cpu-partitioning", "dnf install -y driverctl", "driverctl set-override 0000:07:00.0 vfio-pci", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |2 { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }", "oc apply -f <filename>.yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |2 { \"cniVersion\": \"0.3.1\", 1 \"name\": \"localnet-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\": \"localnet\", 4 \"netAttachDefName\": \"default/localnet-network\" 5 }", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: 
interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4", "oc apply -f <filename>.yaml", "virtctl start <vm_name> -n <namespace>", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3", "virtctl migrate <vm_name>", "oc get VirtualMachineInstanceMigration -w", "NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora", "oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"", "[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>", "virtctl migrate <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "kind: VirtualMachine spec: template: # spec: 
volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1", "oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'", "oc get service -n openshift-cnv", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: \"10.46.41.94\" 1", "oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'", "openshift.example.com", "vm.<FQDN>. IN NS ns.vm.<FQDN>.", "ns.vm.<FQDN>. IN A <kubeSecondaryDNSNameServerIP>", "oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain", "oc get vm -n <namespace> <vm_name> -o yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic networks: - multus: networkName: bridge-conf name: example-nic 1", "ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc get storageprofile", "oc describe storageprofile <name>", "Name: ocs-storagecluster-ceph-rbd-virtualization Namespace: Labels: app=containerized-data-importer app.kubernetes.io/component=storage app.kubernetes.io/managed-by=cdi-controller app.kubernetes.io/part-of=hyperconverged-cluster app.kubernetes.io/version=4.17.2 cdi.kubevirt.io= Annotations: <none> API Version: cdi.kubevirt.io/v1beta1 
Kind: StorageProfile Metadata: Creation Timestamp: 2023-11-13T07:58:02Z Generation: 2 Owner References: API Version: cdi.kubevirt.io/v1beta1 Block Owner Deletion: true Controller: true Kind: CDI Name: cdi-kubevirt-hyperconverged UID: 2d6f169a-382c-4caf-b614-a640f2ef8abb Resource Version: 4186799537 UID: 14aef804-6688-4f2e-986b-0297fd3aaa68 Spec: Status: Claim Property Sets: 1 accessModes: ReadWriteMany volumeMode: Block accessModes: ReadWriteOnce volumeMode: Block accessModes: ReadWriteOnce volumeMode: Filesystem Clone Strategy: csi-clone 2 Data Import Cron Source Format: snapshot 3 Provisioner: openshift-storage.rbd.csi.ceph.com Snapshot Class: ocs-storagecluster-rbdplugin-snapclass Storage Class: ocs-storagecluster-ceph-rbd-virtualization Events: <none>", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <new_storage_class> 1 schedule: \"0 */12 * * *\" 2 managedDataSource: <data_source> 3", "For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE csi-manila-ceph manila.csi.openstack.org Delete Immediate false 11d hostpath-csi-basic (default) kubevirt.io.hostpath-provisioner Delete WaitForFirstConsumer false 11d 1", "oc patch storageclass <current_default_storage_class> -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}' 1", "oc patch storageclass <new_storage_class> -p '{\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}' 1", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 labels: instancetype.kubevirt.io/default-preference: centos.7 instancetype.kubevirt.io/default-instancetype: u1.medium spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos7 4", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: spec: dataImportCronSourceFormat: snapshot", "oc get storageprofile <storage_class> -oyaml", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: 
name: kubevirt-hyperconverged spec: status: dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2", "oc get cdiconfig -o yaml", "oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_cr.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_pvc_template_pool.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: 
source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5", "oc edit vm <vm_name>", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 kubevirt.io/environment: \"production\"", "oc create -f <migration_policy>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>", "oc create -f <migration_name>.yaml", "oc describe vmi <vm_name> -n <namespace>", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1", "virtctl restart <vm_name> -n <namespace>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always", "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> 
node-labeller.kubevirt.io/skip-node=true 1", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get vmis -A", "--- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: [\"kubevirt.io\"] resources: [\"virtualmachineinstances\"] verbs: [\"get\", \"create\", \"delete\"] - apiGroups: [\"subresources.kubevirt.io\"] resources: [\"virtualmachineinstances/console\"] verbs: [\"get\"] - apiGroups: [\"k8s.cni.cncf.io\"] resources: [\"network-attachment-definitions\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io", "oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" 1 spec.param.maxDesiredLatencyMilliseconds: \"10\" 2 spec.param.sampleDurationSeconds: \"5\" 3 spec.param.sourceNode: \"worker1\" 4 spec.param.targetNode: \"worker2\" 5", "oc apply -n <target_namespace> -f <latency_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup labels: kiagnose/checkup-type: kubevirt-vm-latency spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.16.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <latency_job>.yaml", "oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m", "oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" spec.param.maxDesiredLatencyMilliseconds: \"10\" spec.param.sampleDurationSeconds: \"5\" spec.param.sourceNode: \"worker1\" spec.param.targetNode: \"worker2\" status.succeeded: \"true\" status.failureReason: \"\" status.completionTimestamp: \"2022-01-01T09:00:00Z\" status.startTimestamp: \"2022-01-01T09:00:07Z\" 
status.result.avgLatencyNanoSec: \"177000\" status.result.maxLatencyNanoSec: \"244000\" 1 status.result.measurementDurationSec: \"5\" status.result.minLatencyNanoSec: \"135000\" status.result.sourceNode: \"worker1\" status.result.targetNode: \"worker2\"", "oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>", "oc delete job -n <target_namespace> kubevirt-vm-latency-checkup", "oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config", "oc delete -f <latency_sa_roles_rolebinding>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubevirt-storage-checkup-clustereader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-reader subjects: - kind: ServiceAccount name: storage-checkup-sa namespace: <target_namespace> 1", "--- apiVersion: v1 kind: ServiceAccount metadata: name: storage-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: storage-checkup-role rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachines\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"get\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/addvolume\", \"virtualmachineinstances/removevolume\" ] verbs: [ \"update\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstancemigrations\" ] verbs: [ \"create\" ] - apiGroups: [ \"cdi.kubevirt.io\" ] resources: [ \"datavolumes\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"\" ] resources: [ \"persistentvolumeclaims\" ] verbs: [ \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: storage-checkup-role subjects: - kind: ServiceAccount name: storage-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: storage-checkup-role", "oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config namespace: USDCHECKUP_NAMESPACE data: spec.timeout: 10m spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization spec.param.vmiTimeout: 3m --- apiVersion: batch/v1 kind: Job metadata: name: storage-checkup namespace: USDCHECKUP_NAMESPACE spec: backoffLimit: 0 template: spec: serviceAccount: storage-checkup-sa restartPolicy: Never containers: - name: storage-checkup image: quay.io/kiagnose/kubevirt-storage-checkup:main imagePullPolicy: Always env: - name: CONFIGMAP_NAMESPACE value: USDCHECKUP_NAMESPACE - name: CONFIGMAP_NAME value: storage-checkup-config", "oc apply -n <target_namespace> -f <storage_configmap_job>.yaml", "oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap storage-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config labels: kiagnose/checkup-type: kubevirt-storage data: spec.timeout: 10m status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.cnvVersion: 4.16.2 5 status.result.defaultStorageClass: trident-nfs 6 status.result.goldenImagesNoDataSource: <data_import_cron_list> 7 status.result.goldenImagesNotUpToDate: <data_import_cron_list> 8 status.result.ocpVersion: 4.16.0 9 status.result.pvcBound: \"true\" 10 
status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> 11 status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> 12 status.result.storageProfilesWithSmartClone: <storage_profile_list> 13 status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> 14 status.result.storageProfilesWithRWX: |- ocs-storagecluster-ceph-rbd ocs-storagecluster-ceph-rbd-virtualization ocs-storagecluster-cephfs trident-iscsi trident-minio trident-nfs windows-vms status.result.vmBootFromGoldenImage: VMI \"vmi-under-test-dhkb8\" successfully booted status.result.vmHotplugVolume: |- VMI \"vmi-under-test-dhkb8\" hotplug volume ready VMI \"vmi-under-test-dhkb8\" hotplug volume removed status.result.vmLiveMigration: VMI \"vmi-under-test-dhkb8\" migration completed status.result.vmVolumeClone: 'DV cloneType: \"csi-clone\"' status.result.vmsWithNonVirtRbdStorageClass: <vm_list> 15 status.result.vmsWithUnsetEfsStorageClass: <vm_list> 16", "oc delete job -n <target_namespace> storage-checkup", "oc delete config-map -n <target_namespace> storage-checkup-config", "oc delete -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"get\", \"update\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"create\", \"get\", \"delete\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/console\" ] verbs: [ \"get\" ] - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"create\", \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker", "oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0 2 spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" 3", "oc apply -n <target_namespace> -f <dpdk_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup labels: kiagnose/checkup-type: kubevirt-dpdk spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.16.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n 
<target_namespace> -f <dpdk_job>.yaml", "oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: \"dpdk-network-1\" spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0\" spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.trafficGenSentPackets: \"480000000\" 5 status.result.trafficGenOutputErrorPackets: \"0\" 6 status.result.trafficGenInputErrorPackets: \"0\" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: \"480000000\" 10 status.result.vmUnderTestRxDroppedPackets: \"0\" 11 status.result.vmUnderTestTxDroppedPackets: \"0\" 12", "oc delete job -n <target_namespace> dpdk-checkup", "oc delete config-map -n <target_namespace> dpdk-checkup-config", "oc delete -f <dpdk_sa_roles_rolebinding>.yaml", "dnf install libguestfs-tools", "composer-cli distros list", "usermod -a -G weldr user", "newgrp weldr", "cat << EOF > dpdk-vm.toml name = \"dpdk_image\" description = \"Image to use with the DPDK checkup\" version = \"0.0.1\" distro = \"rhel-87\" [[customizations.user]] name = \"root\" password = \"redhat\" [[packages]] name = \"dpdk\" [[packages]] name = \"dpdk-tools\" [[packages]] name = \"driverctl\" [[packages]] name = \"tuned-profiles-cpu-partitioning\" [customizations.kernel] append = \"default_hugepagesz=1GB hugepagesz=1G hugepages=1\" [customizations.services] disabled = [\"NetworkManager-wait-online\", \"sshd\"] EOF", "composer-cli blueprints push dpdk-vm.toml", "composer-cli compose start dpdk_image qcow2", "composer-cli compose status", "composer-cli compose image <UUID>", "cat <<EOF >customize-vm #!/bin/bash Setup hugepages mount mkdir -p /mnt/huge echo \"hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0\" >> /etc/fstab Create vfio-noiommu.conf echo \"options vfio enable_unsafe_noiommu_mode=1\" > /etc/modprobe.d/vfio-noiommu.conf Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration sed -i '/^BLACKLIST_RPC=/ { s/guest-exec-status//; s/guest-exec//g }' /etc/sysconfig/qemu-ga sed -i '/^BLACKLIST_RPC=/ { s/,\\+/,/g; s/^,\\|,USD//g }' /etc/sysconfig/qemu-ga EOF", "virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel", "cat << EOF > Dockerfile FROM scratch COPY --chown=107:107 <UUID>-disk.qcow2 /disk/ EOF", "podman build . 
-t dpdk-rhel:latest", "podman push dpdk-rhel:latest", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "kubevirt_vmsnapshot_disks_restored_from_source{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 
node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: true", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: false", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": true}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": false}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: fedora namespace: default spec: dataVolumeTemplates: - metadata: name: fedora-volume spec: sourceRef: kind: DataSource name: fedora namespace: openshift-virtualization-os-images storage: resources: {} storageClassName: hostpath-csi-basic instancetype: name: u1.medium preference: name: fedora running: true template: metadata: labels: app.kubernetes.io/name: headless spec: domain: devices: downwardMetrics: {} 1 subdomain: headless volumes: - dataVolume: name: fedora-volume name: rootdisk - cloudInitNoCloud: userData: | #cloud-config chpasswd: expire: false password: '<password>' 2 user: fedora name: 
cloudinitdisk", "sudo sh -c 'printf \"GET /metrics/XML\\n\\n\" > /dev/virtio-ports/org.github.vhostmd.1'", "sudo cat /dev/virtio-ports/org.github.vhostmd.1", "sudo dnf install -y vm-dump-metrics", "sudo vm-dump-metrics", "<metrics> <metric type=\"string\" context=\"host\"> <name>HostName</name> <value>node01</value> [...] <metric type=\"int64\" context=\"host\" unit=\"s\"> <name>Time</name> <value>1619008605</value> </metric> <metric type=\"string\" context=\"host\"> <name>VirtualizationVendor</name> <value>kubevirt.io</value> </metric> </metrics>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6", "oc create -f <file_name>.yaml", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- /usr/bin/gather", "oc adm must-gather --all-images", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- PROS=5 /usr/bin/gather 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 /usr/bin/gather --images", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.16.5 /usr/bin/gather --instancetypes", "oc get events -n <namespace>", "oc describe <resource> <resource_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: 
kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6", "oc get pods -n openshift-cnv", "NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m", "oc logs -n openshift-cnv <pod_name>", "{\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373695Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373726Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"setting rate limiter to 5 QPS and 10 Burst\",\"pos\":\"virt-handler.go:462\",\"timestamp\":\"2022-04-17T08:58:37.373782Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]\",\"pos\":\"cpu_plugin.go:96\",\"timestamp\":\"2022-04-17T08:58:37.390221Z\"} {\"component\":\"virt-handler\",\"level\":\"warning\",\"msg\":\"host model mode is expected to contain only one model\",\"pos\":\"cpu_plugin.go:103\",\"timestamp\":\"2022-04-17T08:58:37.390263Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"node-labeller is running\",\"pos\":\"node_labeller.go:94\",\"timestamp\":\"2022-04-17T08:58:37.391011Z\"}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #", "oc apply vm <vm_name>", "virtctl restart <vm_name> -n <namespace>", "oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"storage\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"deployment\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"network\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"compute\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"schedule\"", "{log_type=~\".+\",kubernetes_container_name=~\"<container>|<container>\"} 1 
|json|kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\", kubernetes_container_name=\"compute\"}|json |!= \"custom-ga-command\" 1", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |= \"error\" != \"timeout\"", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>", "oc create -f <snapshot_name>.yaml", "oc wait <vm_name> <snapshot_name> --for condition=Ready", "oc describe vmsnapshot <snapshot_name>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>", "oc create -f <vm_restore>.yaml", "oc get vmrestore <vm_restore>", "apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: 
\"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <snapshot_name>", "oc get vmsnapshot", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'", "{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/virtualization/index
Chapter 3. Optimize workload performance domains
Chapter 3. Optimize workload performance domains One of the key benefits of Ceph storage is the ability to support different types of workloads within the same cluster using Ceph performance domains. Dramatically different hardware configurations can be associated with each performance domain. Ceph system administrators can deploy storage pools on the appropriate performance domain, providing applications with storage tailored to specific performance and cost profiles. Selecting appropriately sized and optimized servers for these performance domains is an essential aspect of designing a Red Hat Ceph Storage cluster. The following lists provide the criteria Red Hat uses to identify optimal Red Hat Ceph Storage cluster configurations on storage servers. These categories are provided as general guidelines for hardware purchases and configuration decisions, and can be adjusted to satisfy unique workload blends. Actual hardware configurations chosen will vary depending on the specific workload mix and vendor capabilities. IOPS optimized An IOPS-optimized storage cluster typically has the following properties: Lowest cost per IOPS. Highest IOPS per GB. 99th percentile latency consistency. Typical uses for an IOPS-optimized storage cluster are: Typically block storage. 3x replication for hard disk drives (HDDs) or 2x replication for solid state drives (SSDs). MySQL on OpenStack clouds. Throughput optimized A throughput-optimized storage cluster typically has the following properties: Lowest cost per MBps (throughput). Highest MBps per TB. Highest MBps per BTU. Highest MBps per Watt. 97th percentile latency consistency. Typical uses for a throughput-optimized storage cluster are: Block or object storage. 3x replication. Active performance storage for video, audio, and images. Streaming media. Cost and capacity optimized A cost- and capacity-optimized storage cluster typically has the following properties: Lowest cost per TB. Lowest BTU per TB. Lowest Watts required per TB. Typical uses for a cost- and capacity-optimized storage cluster are: Typically object storage. Erasure coding is common for maximizing usable capacity. Object archive. Video, audio, and image object repositories. How performance domains work To the Ceph client interface that reads and writes data, a Ceph storage cluster appears as a simple pool where the client stores data. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph object storage daemons (Ceph OSDs, or simply OSDs) both use the controlled replication under scalable hashing (CRUSH) algorithm for storage and retrieval of objects. OSDs run on OSD hosts, the storage servers within the cluster. A CRUSH map describes a topography of cluster resources, and the map exists on both client nodes and Ceph Monitor (MON) nodes within the cluster. Ceph clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph clients communicate directly with OSDs, eliminating a centralized object lookup and a potential performance bottleneck. With awareness of the CRUSH map and communication with their peers, OSDs can handle replication, backfilling, and recovery, allowing for dynamic failure recovery. Ceph uses the CRUSH map to implement failure domains. Ceph also uses the CRUSH map to implement performance domains, which simply take the performance profile of the underlying hardware into consideration.
The CRUSH map describes how Ceph stores data, and it is implemented as a simple hierarchy (acyclic graph) and a ruleset. The CRUSH map can support multiple hierarchies to separate one type of hardware performance profile from another. The following examples describe performance domains. Hard disk drives (HDDs) are typically appropriate for cost- and capacity-focused workloads. Throughput-sensitive workloads typically use HDDs with Ceph write journals on solid state drives (SSDs). IOPS-intensive workloads such as MySQL and MariaDB often use SSDs. All of these performance domains can coexist in a Ceph storage cluster.
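To make the idea concrete, the following is a minimal sketch of how performance domains are commonly expressed with CRUSH device classes: one rule per hardware class, with pools placed on the matching rule. The rule names, pool names, and placement group counts are assumptions for illustration only and are not taken from this guide.

# List the device classes reported by the OSDs (typically hdd, ssd, nvme).
ceph osd crush class ls

# Create one replicated CRUSH rule per performance domain (names are examples).
ceph osd crush rule create-replicated fast-ssd default host ssd
ceph osd crush rule create-replicated capacity-hdd default host hdd

# Create pools bound to those rules; PG counts here are placeholders.
ceph osd pool create mysql-iops 128 128 replicated fast-ssd
ceph osd pool create media-archive 256 256 replicated capacity-hdd

# An existing pool can also be moved to a different performance domain.
ceph osd pool set media-archive crush_rule capacity-hdd

Because each rule constrains placement to OSDs of a single device class, IOPS-sensitive and capacity-oriented pools can coexist in one cluster while landing on different hardware.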
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/hardware_guide/optimize-workload-performance-domains_hw
17.3. Renewing Subsystem Certificates
17.3. Renewing Subsystem Certificates There are two methods of renewing a certificate. Regenerating the certificate takes its original key and its original profile and request, and recreates an identical key with a new validity period and expiration date. Re-keying a certificate resubmits the initial certificate request to the original profile, but generates a new key pair. Administrator certificates can be renewed by being re-keyed. 17.3.1. Re-keying Certificates in the End-Entities Forms Subsystem certificates can be renewed directly in the end user enrollment forms, using the serial number of the original certificate. Renew the certificates in the CA's end-entities forms, as described in Section 5.4, "Renewing Certificates". This requires the serial number of the subsystem certificate being renewed. Import the certificate into the subsystem's database, as described in Section 17.6.1, "Installing Certificates in the Certificate System Database". The certificate can be imported using certutil or the console. For example: 17.3.2. Renewing Certificates in the Console The Java subsystems can renew any of their subsystem certificates through their administrative console. The process is exactly the same as requesting new subsystem certificates (Section 17.2, "Requesting Certificates through the Console"), with one crucial difference: renewal uses an existing key pair rather than generating a new one. Figure 17.1. Renewing Subsystem Certificate After renewing a certificate, delete the original certificate from the database (Section 17.6.3, "Deleting Certificates from the Database"). 17.3.3. Renewing Certificates Using certutil certutil can be used to generate a certificate request using an existing key pair in the certificate database. The new certificate request can then be submitted through the regular profile pages for the CA to issue a renewed certificate. Note Encryption and signing certificates are created in a single step. However, the renewal process only renews one certificate at a time. To renew both certificates in a certificate pair, each one has to be renewed individually. Get the password for the token database. Open the certificate database directory of the instance whose certificate is being renewed. List the key and nickname for the certificate being renewed. In order to renew a certificate, the key pair used to generate the new certificate and the subject name given to it must be the same as those in the old certificate. Copy the alias directory as a backup, then delete the original certificate from the certificate database. For example: Run the certutil command with the options set to the values in the existing certificate. The difference between generating a new certificate and key pair and renewing the certificate is the value of the -n option. To generate an entirely new request and key pair, -k sets the key type and is used with -g, which sets the bit length. For a renewal request, the -n option uses the certificate nickname to access the existing key pair stored in the security database. For further details about the parameters, see the certutil(1) man page. Submit the certificate request and then retrieve it and install it, as described in Section 5.3, "Requesting and Receiving Certificates". 17.3.4. Renewing System Certificates Certificate System does not automatically renew system certificates online while the PKI server is running. However, if a system certificate expires, Certificate System will fail to start.
To renew system certificates: If the system certificate is expired: Create a temporary certificate: Import the temporary certificate into Certificate System's Network Security Services (NSS) database: Start Certificate System: Display the certificates and note the ID of the expired system certificate: Create the new permanent certificate: Stop Certificate System: Import the new certificate to replace the expired certificate: Start Certificate System:
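The contrast between a fresh request and a renewal request described in Section 17.3.3 can be made concrete with the following minimal certutil sketch. The database directory, nickname, subject name, and output file names are hypothetical placeholders; substitute the values reported for your own instance and certificate.
# New request: generate a brand-new key pair (-k sets the key type, -g the bit length)
certutil -d /var/lib/pki/instance_name/alias -R -k rsa -g 2048 \
    -s "cn=Server Certificate,o=Example Domain" -a -o new-server.req
# Renewal request: reuse the existing key pair by referencing its nickname with -n
certutil -d /var/lib/pki/instance_name/alias -R \
    -n "ServerCert cert-instance_name" \
    -s "cn=Server Certificate,o=Example Domain" -a -o renew-server.req
Either request file can then be submitted through the CA profile pages; only the renewal form guarantees that the issued certificate is bound to the original key pair.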
[ "certutil -A -n \"ServerCert cert-example\" -t u,u,u -d /var/lib/pki/ instance_name /alias -a -i /tmp/example.cert", "cat /var/lib/pki/ instance_name /conf/password.conf internal=263163888660", "cd /var/lib/pki/ instance_name /alias", "certutil -K -d . certutil: Checking token \"NSS Certificate DB\" in slot \"NSS User Private Key and Certificate Services\" Enter Password or Pin for \"NSS Certificate DB\": < 0> rsa 69481646e38a6154dc105960aa24ccf61309d37d caSigningCert cert-pki-tomcat CA", "certutil -D -n \"ServerCert cert-example\" -d .", "certutil -d . -R -n \"NSS Certificate DB:cert-pki-tomcat CA\" -s \"cn=CA Authority,o=Example Domain\" -a -o example.req2.txt", "pki-server cert-create sslserver --temp", "pki-server cert-import sslserver", "pki-server start instance_name", "pki-server cert-find", "pki-server cert-create certificate_ID", "pki-server stop instance_name", "pki-server cert-import certificate_ID", "pki-server start instance_name" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/renewing-subsystem-certificates
Chapter 12. Restoring ceph-monitor quorum in OpenShift Data Foundation
Chapter 12. Restoring ceph-monitor quorum in OpenShift Data Foundation In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon must be healthy. The following steps remove the unhealthy mons from quorum, enable you to form a quorum again with a single mon , and then bring the quorum back to the original size. For example, if you have three mons and lose quorum, you need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon . Procedure Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap . Inject a new monmap . Warning You must inject the monmap very carefully. If run incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b , while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c . Take a backup of the current rook-ceph-mon-b Deployment: Open the YAML file and copy the command and arguments from the mon container (see containers list in the following example). This is needed for the monmap changes. Clean up the copied command and args fields to form a pastable command as follows: Note Make sure to remove the single quotes around the --log-stderr-prefix flag and the parentheses around the variables being passed ( ROOK_CEPH_MON_HOST , ROOK_CEPH_MON_INITIAL_MEMBERS , and ROOK_POD_IP ). Patch the rook-ceph-mon-b Deployment to stop this mon from running without deleting the mon pod. Perform the following steps on the mon-b pod: Connect to the pod of a healthy mon and run the following commands: Set the variable. Extract the monmap to a file, by pasting the ceph mon command from the good mon deployment and adding the --extract-monmap=${monmap_path} flag. Review the contents of the monmap . Remove the bad mons from the monmap . In this example, we remove mon a and mon c : Inject the modified monmap into the good mon , by pasting the ceph mon command and adding the --inject-monmap=${monmap_path} flag as follows: Exit the shell to continue. Edit the Rook configmaps . Edit the configmap that the operator uses to track the mons . Verify that in the data element you see three mons such as the following (or more depending on your moncount ): Delete the bad mons from the list to end up with a single good mon . For example: Save the file and exit. Now, you need to adapt the Secret that is used for the mons and other components. Set a value for the variable good_mon_id . For example: You can use the oc patch command to patch the rook-ceph-config secret and update the two key/value pairs mon_host and mon_initial_members . Note If you are using hostNetwork: true , you need to replace the mon_host var with the node IP the mon is pinned to ( nodeSelector ). This is because there is no rook-ceph-mon-* service created in that "mode". Restart the mon . You need to restart the good mon pod with the original ceph-mon command to pick up the changes. Use the oc replace command on the backup of the mon deployment YAML file: Note Option --force deletes the deployment and creates a new one. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again. Delete the two mon deployments that are no longer expected to be in quorum.
For example: In this example, the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c . Restart the operator. Start the rook operator again to resume monitoring the health of the cluster. Note It is safe to ignore errors stating that a number of resources already exist. The operator automatically adds more mons to increase the quorum size again, depending on the mon count.
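The end of this procedure can be summarized in the following verification and cleanup sketch. It assumes the openshift-storage namespace, a healthy mon b, unhealthy mons a and c, and that the rook-ceph-tools toolbox pod is deployed with the app=rook-ceph-tools label; adjust these assumptions to match your cluster.
# Confirm from the toolbox pod that a single mon (b) is now in quorum
TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name | head -n 1)
oc -n openshift-storage exec -it "${TOOLS_POD}" -- ceph status
# Remove the mon deployments that are no longer part of the quorum
oc -n openshift-storage delete deployment rook-ceph-mon-a rook-ceph-mon-c
# Resume normal operation; the operator recreates mons to restore the configured mon count
oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1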
[ "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0", "oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml", "[...] containers: - args: - --fsid=41a537f2-f282-428e-989f-a9e07be32e47 - --keyring=/etc/ceph/keyring-store/keyring - --log-to-stderr=true - --err-to-stderr=true - --mon-cluster-log-to-stderr=true - '--log-stderr-prefix=debug ' - --default-log-to-file=false - --default-mon-cluster-log-to-file=false - --mon-host=USD(ROOK_CEPH_MON_HOST) - --mon-initial-members=USD(ROOK_CEPH_MON_INITIAL_MEMBERS) - --id=b - --setuser=ceph - --setgroup=ceph - --foreground - --public-addr=10.100.13.242 - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db - --public-bind-addr=USD(ROOK_POD_IP) command: - ceph-mon [...]", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP", "oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'", "oc -n openshift-storage exec -it <mon-pod> bash", "monmap_path=/tmp/monmap", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --extract-monmap=USD{monmap_path}", "monmaptool --print /tmp/monmap", "monmaptool USD{monmap_path} --rm <bad_mon>", "monmaptool USD{monmap_path} --rm a monmaptool USD{monmap_path} --rm c", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --inject-monmap=USD{monmap_path}", "oc -n openshift-storage edit configmap rook-ceph-mon-endpoints", "data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789", "data: b=10.100.13.242:6789", "good_mon_id=b", "mon_host=USD(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}') oc -n openshift-storage patch secret rook-ceph-config -p '{\"stringData\": {\"mon_host\": \"[v2:'\"USD{mon_host}\"':3300,v1:'\"USD{mon_host}\"':6789]\", \"mon_initial_members\": \"'\"USD{good_mon_id}\"'\"}}'", "oc replace --force -f 
rook-ceph-mon-b-deployment.yaml", "oc delete deploy <rook-ceph-mon-1> oc delete deploy <rook-ceph-mon-2>", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/troubleshooting_openshift_data_foundation/restoring-ceph-monitor-quorum-in-openshift-data-foundation_rhodf
Using the AMQ Spring Boot Starter
Using the AMQ Spring Boot Starter AMQ Spring Boot Starter 3.0 Developing a Spring Boot application for JMS (jakarta)
null
https://docs.redhat.com/en/documentation/amq_spring_boot_starter/3.0/html/using_the_amq_spring_boot_starter/index
Chapter 2. Topic configuration properties
Chapter 2. Topic configuration properties cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Server Default Property: log.cleanup.policy Importance: medium This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old segments when their retention time or size limit has been reached. The "compact" policy will enable log compaction , which retains the latest value for each key. It is also possible to specify both policies in a comma-separated list (e.g. "delete,compact"). In this case, old segments will be discarded per the retention time and size configuration, while retained segments will be compacted. compression.gzip.level Type: int Default: -1 Valid Values: [1,... ,9] or -1 Server Default Property: compression.gzip.level Importance: medium The compression level to use if compression.type is set to gzip . compression.lz4.level Type: int Default: 9 Valid Values: [1,... ,17] Server Default Property: compression.lz4.level Importance: medium The compression level to use if compression.type is set to lz4 . compression.type Type: string Default: producer Valid Values: [uncompressed, zstd, lz4, snappy, gzip, producer] Server Default Property: compression.type Importance: medium Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. compression.zstd.level Type: int Default: 3 Valid Values: [-131072,... ,22] Server Default Property: compression.zstd.level Importance: medium The compression level to use if compression.type is set to zstd . delete.retention.ms Type: long Default: 86400000 (1 day) Valid Values: [0,... ] Server Default Property: log.cleaner.delete.retention.ms Importance: medium The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan). file.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Server Default Property: log.segment.delete.delay.ms Importance: medium The time to wait before deleting a file from the filesystem. flush.messages Type: long Default: 9223372036854775807 Valid Values: [1,... ] Server Default Property: log.flush.interval.messages Importance: medium This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section ). flush.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.flush.interval.ms Importance: medium This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. 
In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. follower.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: null Importance: medium A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... ] Server Default Property: log.index.interval.bytes Importance: medium This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this. leader.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: null Importance: medium A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. local.retention.bytes Type: long Default: -2 Valid Values: [-2,... ] Server Default Property: log.local.retention.bytes Importance: medium The maximum size of local log segments that can grow for a partition before it deletes the old segments. Default value is -2, it represents retention.bytes value to be used. The effective value should always be less than or equal to retention.bytes value. local.retention.ms Type: long Default: -2 Valid Values: [-2,... ] Server Default Property: log.local.retention.ms Importance: medium The number of milliseconds to keep the local log segment before it gets deleted. Default value is -2, it represents retention.ms value is to be used. The effective value should always be less than or equal to retention.ms value. max.compaction.lag.ms Type: long Default: 9223372036854775807 Valid Values: [1,... ] Server Default Property: log.cleaner.max.compaction.lag.ms Importance: medium The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. max.message.bytes Type: int Default: 1048588 Valid Values: [0,... ] Server Default Property: message.max.bytes Importance: medium The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. 
message.format.version Type: string Default: 3.0-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0, 3.9-IV0] Server Default Property: log.message.format.version Importance: medium [DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is always assumed to be 3.0 if inter.broker.protocol.version is 3.0 or higher (the actual config value is ignored). Otherwise, the value should be a valid ApiVersion. Some examples are: 0.10.0, 1.1, 2.8, 3.0. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. message.timestamp.after.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.after.max.ms Importance: medium This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.before.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.before.max.ms Importance: medium This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.difference.max.ms Importance: medium [DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Server Default Property: log.message.timestamp.type Importance: medium Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . min.cleanable.dirty.ratio Type: double Default: 0.5 Valid Values: [0,... 
,1] Server Default Property: log.cleaner.min.cleanable.ratio Importance: medium This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period. min.compaction.lag.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.cleaner.min.compaction.lag.ms Importance: medium The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Server Default Property: min.insync.replicas Importance: medium When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. preallocate Type: boolean Default: false Server Default Property: log.preallocate Importance: medium True if we should preallocate the file on disk when creating a new log segment. remote.storage.enable Type: boolean Default: false Server Default Property: null Importance: medium To enable tiered storage for a topic, set this configuration as true. You can not disable this config once it is enabled. It will be provided in future versions. retention.bytes Type: long Default: -1 Server Default Property: log.retention.bytes Importance: medium This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes. Additionally, retention.bytes configuration operates independently of "segment.ms" and "segment.bytes" configurations. Moreover, it triggers the rolling of new segment if the retention.bytes is configured to zero. retention.ms Type: long Default: 604800000 (7 days) Valid Values: [-1,... ] Server Default Property: log.retention.ms Importance: medium This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. 
If set to -1, no time limit is applied. Additionally, retention.ms configuration operates independently of "segment.ms" and "segment.bytes" configurations. Moreover, it triggers the rolling of a new segment if the retention.ms condition is satisfied. segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Server Default Property: log.segment.bytes Importance: medium This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. segment.index.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [4,... ] Server Default Property: log.index.size.max.bytes Importance: medium This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. segment.jitter.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.roll.jitter.ms Importance: medium The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling. segment.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Server Default Property: log.roll.ms Importance: medium This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data. unclean.leader.election.enable Type: boolean Default: false Server Default Property: unclean.leader.election.enable Importance: medium Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. message.downconversion.enable Type: boolean Default: true Server Default Property: log.message.downconversion.enable Importance: low This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.
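As an illustration of how these topic-level properties are applied in practice, the following sketch alters compaction-related settings on an existing topic with the kafka-configs.sh tool shipped with Kafka. The bootstrap address, topic name, and chosen values are placeholders only.
# Switch the topic to log compaction and tune the cleaner thresholds
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --alter \
  --add-config cleanup.policy=compact,min.cleanable.dirty.ratio=0.2,min.compaction.lag.ms=60000
# Review the non-default configuration currently set on the topic
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe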
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/kafka_configuration_properties/topic-configuration-properties-str
30.3. Starting the tftp Server
30.3. Starting the tftp Server On the DHCP server, verify that the tftp-server package is installed with the command rpm -q tftp-server . tftp is an xinetd-based service; enable it with the following commands: These commands configure the tftp and xinetd services to start at boot time in runlevels 3, 4, and 5.
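As a complement to the chkconfig commands above, the following sketch also starts xinetd immediately so that tftp is served without waiting for a reboot, and then verifies the runlevel configuration; run these commands as root.
# Enable xinetd and tftp in runlevels 3, 4, and 5
/sbin/chkconfig --level 345 xinetd on
/sbin/chkconfig --level 345 tftp on
# Start (or restart) xinetd now so the tftp service is available immediately
/sbin/service xinetd restart
# Verify the boot-time configuration
/sbin/chkconfig --list xinetd
/sbin/chkconfig --list tftp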
[ "/sbin/chkconfig --level 345 xinetd on /sbin/chkconfig --level 345 tftp on" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch30s03
Chapter 2. Topic configuration properties
Chapter 2. Topic configuration properties cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Server Default Property: log.cleanup.policy Importance: medium This config designates the retention policy to use on log segments. The "delete" policy (which is the default) will discard old segments when their retention time or size limit has been reached. The "compact" policy will enable log compaction , which retains the latest value for each key. It is also possible to specify both policies in a comma-separated list (e.g. "delete,compact"). In this case, old segments will be discarded per the retention time and size configuration, while retained segments will be compacted. compression.type Type: string Default: producer Valid Values: [uncompressed, zstd, lz4, snappy, gzip, producer] Server Default Property: compression.type Importance: medium Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. delete.retention.ms Type: long Default: 86400000 (1 day) Valid Values: [0,... ] Server Default Property: log.cleaner.delete.retention.ms Importance: medium The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan). file.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Server Default Property: log.segment.delete.delay.ms Importance: medium The time to wait before deleting a file from the filesystem. flush.messages Type: long Default: 9223372036854775807 Valid Values: [1,... ] Server Default Property: log.flush.interval.messages Importance: medium This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see the per-topic configuration section ). flush.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.flush.interval.ms Importance: medium This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. follower.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: null Importance: medium A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... 
] Server Default Property: log.index.interval.bytes Importance: medium This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this. leader.replication.throttled.replicas Type: list Default: "" Valid Values: [partitionId]:[brokerId],[partitionId]:[brokerId],... Server Default Property: null Importance: medium A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic. local.retention.bytes Type: long Default: -2 Valid Values: [-2,... ] Server Default Property: log.local.retention.bytes Importance: medium The maximum size of local log segments that can grow for a partition before it deletes the old segments. Default value is -2, it represents retention.bytes value to be used. The effective value should always be less than or equal to retention.bytes value. local.retention.ms Type: long Default: -2 Valid Values: [-2,... ] Server Default Property: log.local.retention.ms Importance: medium The number of milliseconds to keep the local log segment before it gets deleted. Default value is -2, it represents retention.ms value is to be used. The effective value should always be less than or equal to retention.ms value. max.compaction.lag.ms Type: long Default: 9223372036854775807 Valid Values: [1,... ] Server Default Property: log.cleaner.max.compaction.lag.ms Importance: medium The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. max.message.bytes Type: int Default: 1048588 Valid Values: [0,... ] Server Default Property: message.max.bytes Importance: medium The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. message.format.version Type: string Default: 3.0-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0, 3.2-IV0, 3.3-IV0, 3.3-IV1, 3.3-IV2, 3.3-IV3, 3.4-IV0, 3.5-IV0, 3.5-IV1, 3.5-IV2, 3.6-IV0, 3.6-IV1, 3.6-IV2, 3.7-IV0, 3.7-IV1, 3.7-IV2, 3.7-IV3, 3.7-IV4, 3.8-IV0] Server Default Property: log.message.format.version Importance: medium [DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is always assumed to be 3.0 if inter.broker.protocol.version is 3.0 or higher (the actual config value is ignored). Otherwise, the value should be a valid ApiVersion. Some examples are: 0.10.0, 1.1, 2.8, 3.0. 
By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller or equal than the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. message.timestamp.after.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.after.max.ms Importance: medium This configuration sets the allowable timestamp difference between the message timestamp and the broker's timestamp. The message timestamp can be later than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.before.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.before.max.ms Importance: medium This configuration sets the allowable timestamp difference between the broker's timestamp and the message timestamp. The message timestamp can be earlier than or equal to the broker's timestamp, with the maximum allowable difference determined by the value set in this configuration. If message.timestamp.type=CreateTime, the message will be rejected if the difference in timestamps exceeds this specified threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Valid Values: [0,... ] Server Default Property: log.message.timestamp.difference.max.ms Importance: medium [DEPRECATED] The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime. message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Server Default Property: log.message.timestamp.type Importance: medium Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . min.cleanable.dirty.ratio Type: double Default: 0.5 Valid Values: [0,... ,1] Server Default Property: log.cleaner.min.cleanable.ratio Importance: medium This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period. min.compaction.lag.ms Type: long Default: 0 Valid Values: [0,... 
] Server Default Property: log.cleaner.min.compaction.lag.ms Importance: medium The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Server Default Property: min.insync.replicas Importance: medium When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. preallocate Type: boolean Default: false Server Default Property: log.preallocate Importance: medium True if we should preallocate the file on disk when creating a new log segment. remote.storage.enable Type: boolean Default: false Server Default Property: null Importance: medium To enable tiered storage for a topic, set this configuration as true. You can not disable this config once it is enabled. It will be provided in future versions. retention.bytes Type: long Default: -1 Server Default Property: log.retention.bytes Importance: medium This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes. retention.ms Type: long Default: 604800000 (7 days) Valid Values: [-1,... ] Server Default Property: log.retention.ms Importance: medium This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied. segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Server Default Property: log.segment.bytes Importance: medium This configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger segment size means fewer files but less granular control over retention. segment.index.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [4,... ] Server Default Property: log.index.size.max.bytes Importance: medium This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting. segment.jitter.ms Type: long Default: 0 Valid Values: [0,... ] Server Default Property: log.roll.jitter.ms Importance: medium The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling. segment.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... 
] Server Default Property: log.roll.ms Importance: medium This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data. unclean.leader.election.enable Type: boolean Default: false Server Default Property: unclean.leader.election.enable Importance: medium Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. message.downconversion.enable Type: boolean Default: true Server Default Property: log.message.downconversion.enable Importance: low This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.
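To show how several of these properties can be combined at topic creation time, the following sketch creates a topic with explicit retention and durability settings using the kafka-topics.sh tool shipped with Kafka. The broker address, topic name, partition counts, and values are placeholders only.
# Create a topic that keeps data for 3 days or up to 1 GiB per partition,
# and requires two in-sync replicas to acknowledge each acks=all write
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic events --partitions 6 --replication-factor 3 \
  --config retention.ms=259200000 \
  --config retention.bytes=1073741824 \
  --config min.insync.replicas=2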
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_properties/topic-configuration-properties-str
11.6. Consistent Network Device Naming Using biosdevname
11.6. Consistent Network Device Naming Using biosdevname This feature, implemented through the biosdevname udev helper utility, will change the name of all embedded network interfaces, PCI card network interfaces, and virtual function network interfaces from the existing eth[0123...] to the new naming convention as shown in Table 11.2, "The biosdevname Naming Convention" . Note that unless the system is a Dell system, or biosdevname is explicitly enabled as described in Section 11.6.2, "Enabling and Disabling the Feature" , the systemd naming scheme will take precedence. Table 11.2. The biosdevname Naming Convention Device Old Name New Name Embedded network interface (LOM) eth[0123...] em[1234...] [a] PCI card network interface eth[0123...] p< slot >p< ethernet port > [b] Virtual function eth[0123...] p< slot >p< ethernet port >_< virtual interface > [c] [a] New enumeration starts at 1 . [b] For example: p3p4 [c] For example: p3p4_1 11.6.1. System Requirements The biosdevname program uses information from the system's BIOS, specifically the type 9 (System Slot) and type 41 (Onboard Devices Extended Information) fields contained within the SMBIOS. If the system's BIOS does not have SMBIOS version 2.6 or higher and this data, the new naming convention will not be used. Most older hardware does not support this feature because of a lack of BIOSes with the correct SMBIOS version and field information. For BIOS or SMBIOS version information, contact your hardware vendor. For this feature to take effect, the biosdevname package must also be installed. To install it, issue the following command as root : 11.6.2. Enabling and Disabling the Feature To disable this feature, pass the following option on the boot command line, both during and after installation: To enable this feature, pass the following option on the boot command line, both during and after installation: Unless the system meets the minimum requirements, this option will be ignored and the system will use the systemd naming scheme as described in the beginning of the chapter. If the biosdevname install option is specified, it must remain as a boot option for the lifetime of the system.
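The following sketch shows one way to check whether a system meets the SMBIOS requirement and to make the boot option persistent on Red Hat Enterprise Linux 7. The commands are illustrative, assume the dmidecode, biosdevname, and grubby utilities are installed, and should be run as root.
# Check the SMBIOS version reported by the firmware (2.6 or higher is required)
dmidecode | grep -i 'SMBIOS'
# Ask biosdevname which name it would assign to an existing interface
biosdevname -i eth0
# Persistently add the boot option (use biosdevname=0 to disable the feature instead)
grubby --update-kernel=ALL --args="biosdevname=1"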
[ "~]# yum install biosdevname", "biosdevname=0", "biosdevname=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-consistent_network_device_naming_using_biosdevname
Administration guide
Administration guide Red Hat OpenShift Dev Spaces 3.14 Administering Red Hat OpenShift Dev Spaces 3.14 Jana Vrbkova [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html/administration_guide/index
Chapter 1. OAuth APIs
Chapter 1. OAuth APIs 1.1. OAuthAccessToken [oauth.openshift.io/v1] Description OAuthAccessToken describes an OAuth access token. The name of a token must be prefixed with a sha256~ string, must not contain "/" or "%" characters and must be at least 32 characters long. The name of the token is constructed from the actual token by sha256-hashing it and using URL-safe unpadded base64-encoding (as described in RFC4648) on the hashed result. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. OAuthAuthorizeToken [oauth.openshift.io/v1] Description OAuthAuthorizeToken describes an OAuth authorization token Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. OAuthClientAuthorization [oauth.openshift.io/v1] Description OAuthClientAuthorization describes an authorization created by an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. OAuthClient [oauth.openshift.io/v1] Description OAuthClient describes an OAuth client Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. UserOAuthAccessToken [oauth.openshift.io/v1] Description UserOAuthAccessToken is a virtual resource to mirror OAuthAccessTokens to the user the access token was issued for Type object
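As a brief illustration, the OAuth resources described above can be listed with the standard OpenShift CLI. The commands below are a sketch and assume a logged-in user with sufficient RBAC permissions; token names are the sha256~ hashes described for OAuthAccessToken.
# List registered OAuth clients and all current access tokens (requires cluster-level access)
oc get oauthclients
oc get oauthaccesstokens
# List only the access tokens issued to the currently logged-in user
oc get useroauthaccesstokens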
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/oauth_apis/oauth-apis
Red Hat Ansible Automation Platform Service on AWS
Red Hat Ansible Automation Platform Service on AWS Ansible on Clouds 2.x Install and configure Red Hat Ansible Automation Platform Service on AWS Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/ansible_on_clouds/2.x/html/red_hat_ansible_automation_platform_service_on_aws/index
Chapter 5. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1]
Chapter 5. PodSecurityPolicySelfSubjectReview [security.openshift.io/v1] Description PodSecurityPolicySelfSubjectReview checks whether this user/SA tuple can create the PodTemplateSpec Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicySelfSubjectReviewSpec contains specification for PodSecurityPolicySelfSubjectReview. status object PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. 5.1.1. .spec Description PodSecurityPolicySelfSubjectReviewSpec contains specification for PodSecurityPolicySelfSubjectReview. Type object Required template Property Type Description template PodTemplateSpec template is the PodTemplateSpec to check. 5.1.2. .status Description PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. Type object Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy A nil , indicates that it was denied. reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 5.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyselfsubjectreviews POST : create a PodSecurityPolicySelfSubjectReview 5.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyselfsubjectreviews Table 5.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a PodSecurityPolicySelfSubjectReview Table 5.2. Body parameters Parameter Type Description body PodSecurityPolicySelfSubjectReview schema Table 5.3. HTTP responses HTTP code Response body 200 - OK PodSecurityPolicySelfSubjectReview schema 201 - Created PodSecurityPolicySelfSubjectReview schema 202 - Accepted PodSecurityPolicySelfSubjectReview schema 401 - Unauthorized Empty
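A minimal sketch of calling this endpoint with the OpenShift CLI follows. The namespace, container name, and image are hypothetical placeholders, and the review object is evaluated for the requesting user without being persisted.
# Ask the API which rule, if any, would admit this pod template for the current user
oc create -n my-project -o yaml -f - <<'EOF'
apiVersion: security.openshift.io/v1
kind: PodSecurityPolicySelfSubjectReview
spec:
  template:
    spec:
      containers:
      - name: test
        image: registry.example.com/httpd:latest
EOF
If the request is allowed, the returned status.allowedBy field references the matching rule; a nil allowedBy indicates the template would be denied.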
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_apis/podsecuritypolicyselfsubjectreview-security-openshift-io-v1
Chapter 3. Signing Container Images
Chapter 3. Signing Container Images Signing container images on RHEL systems provides a means of validating where a container image came from, checking that the image has not been tampered with, and setting policies to determine which validated images you will allow to be pulled to your systems. Before you begin, there are a few things you should know about Red Hat container image signing: Docker version : The features described here require at least Docker 1.12.3. So you can use the docker package for any RHEL and RHEL Atomic release after 7.3.2. Red Hat Signed Images : As of RHEL and RHEL Atomic 7.4, image signing is fully supported (no longer tech preview). With RHEL 7.4, Red Hat has also begun signing its own container images. So you can use the instructions provided in this chapter to determine the authenticity of those images using Red Hat GPG keys. This chapter describes tools and procedures you can use on Red Hat systems to not only sign images, but also consume images that have been signed in these ways: Creating Image Signatures : By signing images with a private key and sharing a public key generated from it, others can use the public key to authenticate the images you share. The signatures needed to validate the images can be made available from an accessible location (like a Web server) in what is referred to as a "signature store" or made available from a directory on the local filesystem. The actual signature can be created from an image stored in a registry or at the time the image is pushed to a container registry. Verifying Signed Images : You can check a signed image when it is pulled from a registry. This includes verifying Red Hat's signed images. Trusting Images : Besides determining that an image is valid, you can also set policies that say which valid images you trust to use on your system, as well as which registries you trust to use without validation. For the current release of Red Hat Enterprise Linux and RHEL Atomic Host, there are a limited number of tools and container registries that support image signing. Over time, however, you can expect most features on RHEL systems that pull or store images to support signing. To get you started in the meantime, you can use the following features in RHEL: Registries : Currently, you can use a local container registry (docker-distribution package) and the Docker Hub (docker.io) from RHEL systems to push and pull signed images. For the time being, image signing features are only supported in v2 (not v1) Docker Registries. Image signing tools : To create image signatures, you can use atomic sign (to create a signature from a stored image) or atomic push (to create an image signature as you push it to a registry). Image verifying tools : To verify a signed image, you can use the atomic trust command to identify which image registries you trust without verification and which registries require verification. Later, when you pull an image, the atomic pull or docker pull command will validate the image as it is pulled to the local system. Operating systems : The image signing and verification features described here are supported in Red Hat Enterprise Linux Server and RHEL Atomic Host systems, version 7.3.1 and later. For a more complete description of Red Hat container image signing, see: Container Image Signing Integration Guide 3.1. Getting Container Signing Software If you are using a RHEL Atomic Host 7.3.2 system (or later), the tools you need to sign, trust and verify images are already included.
Most container-related software for RHEL server is in the rhel-7-server-extras-rpms yum repository. So, on your RHEL 7.3 server system, you should enable that repository, install packages, and start the docker service as follows: Before you start the docker service, you must enable signature verification. To do that edit the /etc/sysconfig/docker file and change --signature-verification=false to --signature-verification=true on the OPTIONS line. It should appear as follows: Then, enable and start the docker service: The docker service should now be running and ready for you to use. 3.2. Creating Image Signatures Image signing in this section is broken down into the following steps: GPG keys : Create GPG keys for signing images. Sign images : Choose from either creating a signature from an image already in a container registry or creating a signature as you push it to a container registry. 3.2.1. Create GPG Keys To sign container images on Red Hat systems, you need to have a private GPG key and a public key you create from it. If you don't already have GPG keys you want to use, you can create them with the gpg2 --gen-key command. This procedure was done from a terminal window on a GNOME desktop as the user who will sign the images: Create private key : Use the following command to interactively add information needed to create the private key. You can use defaults for most prompts, although you should properly identify your user name, email address, and add a comment. You also must add a passphrase when prompted. You will need the passphrase later, when you attempt to sign an image. Anyone with access to your private key and passphrase will be able to identify content as belonging to you. Create entropy : You need to generate activity on your system to help produce random data that will be used to produce your key. This can be done by moving windows around or opening another Terminal window and running a variety of commands, such as ones that write to disk or read lots of data. Once enough entropy is generated, the gpg2 command will exit and show information about the key just created. Create public key : Here's an example of creating a public key from that private key: List keys : Use the following command to list your keys. At this point, the private key you created is available to use from this user account to sign images and the public key ( mysignkey.gpg ) can be shared with others for them to use to validate your images. Keep the private key secure. If someone else has that key, they could use it to sign files as though those files belonged to you. 3.2.2. Creating an Image Signature With your private key in place, you can begin preparing to sign your images. The steps below show two different ways to sign images: From a repository : You can create a signature for an image that is already in a repository using atomic sign . Image at push time : You can tag a local image and create an image signature as you push it to a registry using atomic push . Image signing requires super user privileges to run the atomic and docker commands. However, when you sign, you probably want to use your own keys. To take that into account, when you run the atomic command with sudo , it will read keys from your regular user account's home directory (not the root user's directory) to do the signing. 3.3. Set up to do Image Signing If you are going to sign a lot of images on a personal system, you can identify signing information in your /etc/atomic.conf file. 
Once you add that information to atomic.conf , the atomic command assumes that you want to use that information to sign any image you push or sign. For example, for a user account jjsmith with a default signer of [email protected] , you could add the following lines to the /etc/atomic.conf file so that all images you push or sign would be signed with that information by default: To override those default values and use a different signer or signing home directory, use the --sign-by and --gnupghome options, respectively, on the atomic command line. For example, to have [email protected] and /home/jjsmith/.gnupg used as the signer and default gnupg directory, type the following on the atomic command line: 3.4. Creating a Signature for an Image in a Repository You can create an image signature for an image that is already pushed to a registry using the atomic sign command. Use docker search to find the image, then atomic sign to create a signature for that image. IMPORTANT : The image signature is created using the exact name you enter on the atomic sign command line. When someone verifies that image against the signature later, they must use the exact same name or the image will not be verified. Find image : Find the image for which you want to create the signature using the docker search command: Create the image signature : Choose the image you want to sign (jjsmith/mybusybox in this example). To sign it with the default signer and home directory entered in /etc/atomic.conf , type the following: When you are prompted for a passphrase, enter the passphrase you entered when you created your private key. As noted in the output, you can see that the signature was created and stored in the /var/lib/atomic/sigstore directory on your local system under the registry name, user name, and image name ( docker.io/jjsmith/mybusybox@sha256:... ). 3.5. Creating an Image Signature at Push Time To create an image signature for an image at the time you push it, you can tag it with the identity of the registry and possibly the username you want to be associated with the image. This shows an example of creating an image signature at the point that you push it to the Docker Hub (docker.io). In this case, the procedure relies on the default signer and GNUPG home directory assigned earlier in the /etc/atomic.conf file. Tag image : Using the image ID of the image, tag it with the identity of the registry to which you want to push it. Push and sign the image : The following command creates the signature as the image is pushed to docker.io: When prompted, enter the passphrase you assigned when you created your private key. At this point, the image should be available from the repository and ready to pull. 3.6. Sharing the Signature Store The signatures you just created are stored in the /var/lib/atomic/sigstore directory. For the purposes of trying out signatures, you can just use that signature store from the local system. However, when you begin sharing signed images with others and have them validate those images, you would typically share that signature store directory structure from a Web server or other centrally available location. You would also need to share the public key associated with the private key you used to create the signatures. For this procedure, you could just copy your public key to the /etc/pki/containers directory and use the signature store from the local /var/lib/atomic/sigstore directory.
For example: As for the location of the signature store, you can assign that location when you run an atomic trust add command (shown later). Or you can edit the /etc/containers/registries.d/default.yaml file directly and identify a value for the sigstore setting (such as, sigstore: file:///var/lib/atomic/sigstore ). 3.7. Validating and Trusting Signed Images Using the atomic trust command, you can identify policies that determine which registries you trust to allow container images to be pulled to your system. To further refine the images you accept, you can set a trust value to accept all images from a registry or accept only signed images from a registry. As part of accepting signed images, you can also identify the location of the keys to use to validate the images. The following procedure describes how to show and change trust policies related to pulling images to your system with the atomic command. Check trust values : Run this command to see the current trust value for pulling container images with the atomic command: When you start out, the trust default allows any request to pull an image to be accepted. Set default to reject : Having the default be reject might be harsh if you are just trying out containers, but could be considered when you are ready to lock down which registries to allow. So you could leave the default as accept, if you like. To limit pulled images to only accept images from certain registries, you can start by changing the default to reject as follows: You can see that the default is now to reject all requests to pull images, so an attempt to pull a container image fails. Add trusted registry without signatures : This step shows how to allow your system to pull images from the docker.io registry without requiring signature checking. You could repeat this step for every registry you want to allow (including your own local registries) for which you don't want to require signatures. Type the following to allow docker.io images to be pulled, without signature verification: Notice that you were able to pull the centos image from docker.io after adding that trust line. Add trusted registry with signatures : This example identifies the Red Hat Registry (registry.access.redhat.com) as being a registry from which the local system will be able to pull signed images: Pull and verify image : At this point, you can verify the trust settings. Run the atomic trust show command, which shows that only signed images from registry.access.redhat.com and any image from docker.io will be accepted. All other images will be rejected. The trust policies you just set are stored in the /etc/containers/policy.json file. See the Reasonable Locked-Down System example policy.json file for a good, working example of this file. You can add your own policy files to the /etc/containers/registries.d directory. You can name those files anything you like, as long as .yaml is at the end.
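To tie the signature store and trust steps together, the following sketch shows one way to publish the local signature store over HTTP and then require signatures for your own registry. The host names registry.example.com and sigstore.example.com, and the use of an Apache document root, are assumptions for illustration rather than requirements:

# Publish the signature store from a web server document root (assumes httpd is installed)
sudo mkdir -p /var/www/html/sigstore
sudo cp -r /var/lib/atomic/sigstore/. /var/www/html/sigstore/

# Trust only signed images from your own registry, validated with your public key
sudo atomic trust add registry.example.com:5000 \
    --pubkeys /etc/pki/containers/mysignkey.gpg \
    --sigstore https://sigstore.example.com/sigstore \
    --type signedBy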
Refer to the following articles related to validating Red Hat container images: Verifying image signing for Red Hat Container Registry : Describes how to use the atomic command to indicate that you trust images from the Red Hat Registry and check the signatures of images you pull from that registry using the docker service. Image Signatures : Describes how to use image signatures with OpenShift commands and the OpenShift Registry. Follow these steps to identify how to trust the Red Hat Registry and validate the signatures of images you get from that registry: Add the Red Hat container registry (registry.access.redhat.com) as a trusted registry. Identify the location of the signature store (--sigstore) that contains the signature for each signed image in the registry and the public keys (--pubkeys) used to validate those signatures against the images. Pull a container from the trusted registry. If the signature is valid, the image should be pulled with the output looking similar to the following: If you had identified the wrong signature directory or no signature was found there for the image you wanted to pull, you would see output that looks as follows: 3.9. Understanding Image Signing Configuration Files The image signing process illustrated in this chapter resulted in several configuration files being created or modified. Instead of using the atomic command to create and modify those files, you could edit those files directly. Here are examples of the files created when you run commands to trust registries. 3.9.1. policy.json file The atomic trust command modifies settings in the /etc/containers/policy.json file. Here is the content of that file that resulted from changing the default trust policy to reject, accepting all requests to pull images from the registry.access.redhat.com registry without verifying signatures, and accepting requests to pull images from the jjones account in the docker.io registry that are signed by GPGKeys in the /etc/pki/containers/mysignkey.gpg file: 3.9.2. whatever.yaml Settings added from the atomic trust add command line when adding trust settings for a registry are stored in a new file that is created in the /etc/containers/registries.d/ directory. The file name includes the registry's name and ends in the .yaml suffix. For example, if you were adding trust settings for the user jjones at docker.io ( docker.io/jjones ), the file that stores the settings is /etc/containers/registries.d/docker.io-jjones.yaml . The command line could include the location of the signature store: The settings in the docker.io-jjones.yaml file override default settings on your system. Default signing settings are stored in the /etc/containers/registries.d/default.yaml file. For more information about the format of the registries.d files, see Registries Configuration Directory .
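As a sketch of what such a registries.d file might contain when the signature store is served over the web instead of read from a local path (the URL is a placeholder), you could also create the file by hand:

# Create /etc/containers/registries.d/docker.io-jjones.yaml directly (URL is hypothetical)
sudo tee /etc/containers/registries.d/docker.io-jjones.yaml <<'EOF'
docker:
  docker.io/jjones:
    sigstore: https://sigstore.example.com/sigstore
EOF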
[ "subscription-manager repos --enable=rhel-7-server-extras-rpms yum install skopeo docker atomic", "OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=true'", "systemctl enable docker; systemctl start docker", "gpg2 --gen-key Please select what kind of key you want: Your selection? 1 What keysize do you want? (2048) 2048 Please specify how long the key should be valid. 0 = key does not expire Key is valid for? (0) 0 Key does not expire at all Is this correct? (y/N) y Real name: Joe Smith Email address: [email protected] Comment: Image Signing Key You selected this USER-ID: \"Joe Smith (Image Signing Key) <[email protected]>\" Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O You need a Passphrase to protect your secret key. gpg: /home/jsmith/.gnupg/trustdb.gpg: trustdb created gpg: key D3F46FFF marked as ultimately trusted public and secret key created and signed.", "gpg2 --armor --export --output mysignkey.gpg [email protected]", "gpg2 --list-keys /home/jsmith/.gnupg/pubring.gpg ------------------------------- pub 2048R/D3F46FFF 2016-10-20 uid Joe Smith (Image Signing Key) <[email protected]> sub 2048R/775A4344 2016-10-20", "default_signer: [email protected] gnupg_homedir: /home/jjsmith/.gnupg", "sudo atomic push --sign-by [email protected] --gnupghome /home/jjsmith/.gnupg docker.io/wherever/whatever", "sudo docker search docker.io/jjsmith/mybusybox INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED docker.io docker.io/jjsmith/mybusybox 0.", "sudo atomic sign docker.io/jjsmith/mybusybox Created: /var/lib/atomic/sigstore/docker.io/jjsmith/mybusybox@sha256:9393222c6789842b16bcf7306b6eb4b486d81a48d3b8b8f206589b5d1d5a6101/signature-1", "sudo docker tag hangman docker.io/jjsmith/hangman:latest", "sudo atomic push -t docker docker.io/jjsmith/hangman:latest Registry Username: jjsmith Registry Password: ***** Copying blob sha256:5f70bf18 Signing manifest Writing manifest to image destination Storing signatures", "sudo mkdir /etc/pki/containers sudo cp mysignkey.gpg /etc/pki/containers/", "sudo atomic trust show * (default) accept", "sudo atomic trust default reject sudo atomic trust show * (default) reject sudo atomic pull docker.io/centos Pulling docker.io/library/centos:latest FATA[0000] Source image rejected: Running image docker://centos:latest is rejected by policy.", "sudo atomic trust add docker.io --type insecureAcceptAnything sudo atomic pull docker.io/centos Pulling docker.io/library/centos:latest Copying blob USD", "sudo atomic trust add registry.access.redhat.com --pubkeys /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release --sigstore https://access.redhat.com/webassets/docker/content/sigstore --type signedBy", "sudo atomic trust show * (default) reject docker.io accept registry.access.redhat.com signed [email protected],[email protected]", "sudo atomic trust add --pubkeys /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release --sigstore https://access.redhat.com/webassets/docker/content/sigstore registry.access.redhat.com", "sudo atomic pull rhel7/etcd Pulling registry.access.redhat.com/rhel7/etcd:latest Copying blob Writing manifest to image destination Storing signatures", "FATA[0004] Source image rejected: A signature was required, but no signature exists", "\"default\": [ { \"type\": \"reject\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"insecureAcceptAnything\" } ], \"docker.io/jjones\": [ { \"keyType\": \"GPGKeys\", \"type\": \"signedBy\", \"keyPath\": \"/etc/pki/containers/mysignkey.gpg\" } ] } } }", "docker: docker.io/jjones: 
sigstore: file:///var/lib/atomic/sigstore/" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/managing_containers/signing_container_images
probe::stap.pass2
probe::stap.pass2 Name probe::stap.pass2 - Starting stap pass2 (elaboration) Synopsis stap.pass2 Values session the systemtap_session variable s Description pass2 fires just after the call to gettimeofday , just before the call to semantic_pass.
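As a usage sketch (not part of the reference entry itself), you can watch for this probe while a second stap invocation runs in another terminal. The handler below only prints a message and does not dereference the session value:

# Terminal 1: report when any stap run on this host reaches pass 2 (elaboration)
stap -e 'probe stap.pass2 { printf("stap pass2 (elaboration) started\n") }'

# Terminal 2: run any script to trigger the marker, for example
stap -e 'probe begin { exit() }'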
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stap-pass2
Chapter 7. Comparing Systemd services to containerized services
Chapter 7. Comparing Systemd services to containerized services This chapter provides reference material to show how containerized services differ from Systemd services. 7.1. Systemd services and containerized services The following table shows the correlation between Systemd-based services and the podman containers controlled with the Systemd services. Component Systemd service Containers OpenStack Image Storage (glance) tripleo_glance_api.service glance_api HAProxy tripleo_haproxy.service haproxy OpenStack Orchestration (heat) tripleo_heat_api.service tripleo_heat_api_cfn.service tripleo_heat_api_cron.service tripleo_heat_engine.service heat_api heat_api_cfn heat_api_cron heat_engine OpenStack Bare Metal (ironic) tripleo_ironic_api.service tripleo_ironic_conductor.service tripleo_ironic_inspector.service tripleo_ironic_inspector_dnsmasq.service tripleo_ironic_neutron_agent.service tripleo_ironic_pxe_http.service tripleo_ironic_pxe_tftp.service tripleo_iscsid.service ironic_api ironic_conductor ironic_inspector ironic_inspector_dnsmasq ironic_neutron_agent ironic_pxe_http ironic_pxe_tftp iscsid Keepalived tripleo_keepalived.service keepalived OpenStack Identity (keystone) tripleo_keystone.service tripleo_keystone_cron.service keystone keystone_cron Logrotate tripleo_logrotate_crond.service logrotate_crond Memcached tripleo_memcached.service memcached MySQL tripleo_mysql.service mysql OpenStack Networking (neutron) tripleo_neutron_api.service tripleo_neutron_dhcp.service tripleo_neutron_l3_agent.service tripleo_neutron_ovs_agent.service neutron_api neutron_dhcp neutron_l3_agent neutron_ovs_agent OpenStack Compute (nova) tripleo_nova_api.service tripleo_nova_api_cron.service tripleo_nova_compute.service tripleo_nova_conductor.service tripleo_nova_metadata.service tripleo_nova_placement.service tripleo_nova_scheduler.service nova_api nova_api_cron nova_compute nova_conductor nova_metadata nova_placement nova_scheduler RabbitMQ tripleo_rabbitmq.service rabbitmq OpenStack Object Storage (swift) tripleo_swift_account_reaper.service tripleo_swift_account_server.service tripleo_swift_container_server.service tripleo_swift_container_updater.service tripleo_swift_object_expirer.service tripleo_swift_object_server.service tripleo_swift_object_updater.service tripleo_swift_proxy.service tripleo_swift_rsync.service swift_account_reaper swift_account_server swift_container_server swift_container_updater swift_object_expirer swift_object_server swift_object_updater swift_proxy swift_rsync OpenStack Messaging (zaqar) tripleo_zaqar.service tripleo_zaqar_websocket.service zaqar zaqar_websocket Aodh tripleo_aodh_api.service tripleo_aodh_evaluator.service tripleo_aodh_api_cron.service tripleo_aodh_listener.service tripleo_aodh_notifier.service aodh_api aodh_listener aodh_evaluator aodh_api_cron aodh_notifier Gnocchi tripleo_gnocchi_api.service tripleo_gnocchi_metricd.service tripleo_gnocchi_statsd.service gnocchi_api gnocchi_metricd gnocchi_statsd Ceilometer tripleo_ceilometer_agent_central.service tripleo_ceilometer_agent_compute.service tripleo_ceilometer_agent_notification.service ceilometer_agent_central ceilometer_agent_compute ceilometer_agent_notification 7.2. Systemd log locations vs containerized log locations The following table shows Systemd-based OpenStack logs and their equivalents for containers. All container-based log locations are available on the physical host and are mounted to the container. 
OpenStack service Systemd service logs Container logs aodh /var/log/aodh/ /var/log/containers/aodh/ /var/log/containers/httpd/aodh-api/ ceilometer /var/log/ceilometer/ /var/log/containers/ceilometer/ cinder /var/log/cinder/ /var/log/containers/cinder/ /var/log/containers/httpd/cinder-api/ glance /var/log/glance/ /var/log/containers/glance/ gnocchi /var/log/gnocchi/ /var/log/containers/gnocchi/ /var/log/containers/httpd/gnocchi-api/ heat /var/log/heat/ /var/log/containers/heat/ /var/log/containers/httpd/heat-api/ /var/log/containers/httpd/heat-api-cfn/ horizon /var/log/horizon/ /var/log/containers/horizon/ /var/log/containers/httpd/horizon/ keystone /var/log/keystone/ /var/log/containers/keystone /var/log/containers/httpd/keystone/ databases /var/log/mariadb/ /var/log/mongodb/ /var/log/mysqld.log /var/log/containers/mysql/ neutron /var/log/neutron/ /var/log/containers/neutron/ /var/log/containers/httpd/neutron-api/ nova /var/log/nova/ /var/log/containers/nova/ /var/log/containers/httpd/nova-api/ /var/log/containers/httpd/placement/ rabbitmq /var/log/rabbitmq/ /var/log/containers/rabbitmq/ redis /var/log/redis/ /var/log/containers/redis/ swift /var/log/swift/ /var/log/containers/swift/ 7.3. Systemd configuration vs containerized configuration The following table shows Systemd-based OpenStack configuration and their equivalents for containers. All container-based configuration locations are available on the physical host, are mounted to the container, and are merged (via kolla ) into the configuration within each respective container. OpenStack service Systemd service configuration Container configuration aodh /etc/aodh/ /var/lib/config-data/puppet-generated/aodh/ ceilometer /etc/ceilometer/ /var/lib/config-data/puppet-generated/ceilometer/etc/ceilometer/ cinder /etc/cinder/ /var/lib/config-data/puppet-generated/cinder/etc/cinder/ glance /etc/glance/ /var/lib/config-data/puppet-generated/glance_api/etc/glance/ gnocchi /etc/gnocchi/ /var/lib/config-data/puppet-generated/gnocchi/etc/gnocchi/ haproxy /etc/haproxy/ /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/ heat /etc/heat/ /var/lib/config-data/puppet-generated/heat/etc/heat/ /var/lib/config-data/puppet-generated/heat_api/etc/heat/ /var/lib/config-data/puppet-generated/heat_api_cfn/etc/heat/ horizon /etc/openstack-dashboard/ /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/ keystone /etc/keystone/ /var/lib/config-data/puppet-generated/keystone/etc/keystone/ databases /etc/my.cnf.d/ /etc/my.cnf /var/lib/config-data/puppet-generated/mysql/etc/my.cnf.d/ neutron /etc/neutron/ /var/lib/config-data/puppet-generated/neutron/etc/neutron/ nova /etc/nova/ /var/lib/config-data/puppet-generated/nova/etc/nova/ /var/lib/config-data/puppet-generated/etc/placement/ rabbitmq /etc/rabbitmq/ /var/lib/config-data/puppet-generated/rabbitmq/etc/rabbitmq/ redis /etc/redis/ /etc/redis.conf /var/lib/config-data/puppet-generated/redis/etc/redis/ /var/lib/config-data/puppet-generated/redis/etc/redis.conf swift /etc/swift/ /var/lib/config-data/puppet-generated/swift/etc/swift/ /var/lib/config-data/puppet-generated/swift_ringbuilder/etc/swift/
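As an illustration of how these pieces map at run time, the following commands inspect a single service from both angles. nova_api is used only as an example, and the commands assume a controller node that runs podman-based services:

# Systemd unit that supervises the container
sudo systemctl status tripleo_nova_api.service

# The container itself, its logs, and its merged configuration
sudo podman ps --filter name=nova_api
sudo podman logs nova_api
sudo ls /var/lib/config-data/puppet-generated/nova/etc/nova/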
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/transitioning_to_containerized_services/assembly_comparing-systemd-services-to-containerized-services
7.3. Issuing CRLs
7.3. Issuing CRLs The Certificate Manager uses its CA signing certificate key to sign CRLs. To use a separate signing key pair for CRLs, set up a CRL signing key and change the Certificate Manager configuration to use this key to sign CRLs. See Section 7.3.4, "Setting a CA to Use a Different Certificate to Sign CRLs" for more information. Set up CRL issuing points. An issuing point is already set up and enabled for a master CRL. Figure 7.1. Default CRL Issuing Point Additional issuing points for the CRLs can be created. See Section 7.3.1, "Configuring Issuing Points" for details. There are five types of CRLs the issuing points can create, depending on the options set when configuring the issuing point to define what the CRL will list: Master CRL contains the list of revoked certificates from the entire CA. ARL is an Authority Revocation List containing only revoked CA certificates. CRL with expired certificates includes revoked certificates that have expired in the CRL. CRL from certificate profiles determines the revoked certificates to include based on the profiles used to create the certificates originally. CRLs by reason code determines the revoked certificates to include based on the revocation reason code. Configure the CRLs for each issuing point. See Section 7.3.2, "Configuring CRLs for Each Issuing Point" for details. Set up the CRL extensions which are configured for the issuing point. See Section 7.3.3, "Setting CRL Extensions" for details. Set up the delta CRL for an issuing point by enabling extensions for that issuing point, DeltaCRLIndicator or CRLNumber . Set up the CRLDistributionPoint extension to include information about the issuing point. Set up publishing CRLs to files, an LDAP directory, or an OCSP responder. See Chapter 9, Publishing Certificates and CRLs for details about setting up publishing. 7.3.1. Configuring Issuing Points Issuing points define which certificates are included in a new CRL. A master CRL issuing point is created by default for a master CRL containing a list of all revoked certificates for the Certificate Manager. To create a new issuing point, do the following: Open the Certificate System Console. In the Configuration tab, expand Certificate Manager from the left navigation menu. Then select CRL Issuing Points . To edit an issuing point, select the issuing point, and click Edit . The only parameters which can be edited are the name of the issuing point and whether the issuing point is enabled or disabled. To add an issuing point, click Add . The CRL Issuing Point Editor window opens. Figure 7.2. CRL Issuing Point Editor Note If some fields do not appear large enough to read the content, expand the window by dragging one of the corners. Fill in the following fields: Enable . Enables the issuing point if selected; deselect to disable. CRL Issuing Point name . Gives the name for the issuing point; spaces are not allowed. Description . Describes the issuing point. Click OK . To view and configure a new issuing point, close the CA Console, then open the Console again. The new issuing point is listed below the CRL Issuing Points entry in the navigation tree. Configure CRLs for the new issuing point, and set up any CRL extensions that will be used with the CRL. See Section 7.3.2, "Configuring CRLs for Each Issuing Point" for details on configuring an issuing point. See Section 7.3.3, "Setting CRL Extensions" for details on setting up the CRL extensions. All the CRLs created appear on the Update Revocation List page of the agent services pages. 
Note pkiconsole is being deprecated. 7.3.2. Configuring CRLs for Each Issuing Point Information, such as the generation interval, the CRL version, CRL extensions, and the signing algorithm, can all be configured for the CRLs for the issuing point. The CRLs must be configured for each issuing point. Open the CA console. In the navigation tree, select Certificate Manager , and then select CRL Issuing Points . Select the issuing point name below the Issuing Points entry. Configure how and how often the CRLs are updated by supplying information in the Update tab for the issuing point. This tab has two sections, Update Schema and Update Frequency . The Update Schema section has the following options: Enable CRL generation . This checkbox sets whether CRLs are generated for that issuing point. Generate full CRL every # delta(s) . This field sets how frequently CRLs are created in relation to the number of changes. Extend update time in full CRLs . This provides an option to set the nextUpdate field in the generated CRLs. The nextUpdate parameter shows the date when the next CRL is issued, regardless of whether it is a full or delta CRL. When using a combination of full and delta CRLs, enabling Extend update time in full CRLs will make the nextUpdate parameter in a full CRL show when the next full CRL will be issued. Otherwise, the nextUpdate parameter in the full CRL will show when the next delta CRL will be issued, since the delta will be the next CRL to be issued. The Update Frequency section sets the different intervals when the CRLs are generated and issued to the directory. Every time a certificate is revoked or released from hold . This sets the Certificate Manager to generate the CRL every time it revokes a certificate. The Certificate Manager attempts to issue the CRL to the configured directory whenever it is generated. Generating a CRL can be time consuming if the CRL is large. Configuring the Certificate Manager to generate CRLs every time a certificate is revoked may engage the server for a considerable amount of time; during this time, the server will not be able to update the directory with any changes it receives. This setting is not recommended for a standard installation. This option should be selected to test revocation immediately, such as testing whether the server issues the CRL to a flat file. Update the CRL at . This field sets a daily time when the CRL should be updated. To specify multiple times, enter a comma-separated list of times, such as 01:50,04:55,06:55 . To enter a schedule for multiple days, enter a comma-separated list to set the times within the same day, and then a semicolon-separated list to identify times for different days. For example, this sets revocation on Day 1 of the cycle at 1:50am, 4:55am, and 6:55am and then Day 2 at 2am, 5am, and 5pm: Update the CRL every . This checkbox enables generating CRLs at the interval set in the field. For example, to issue CRLs every day, select the checkbox, and enter 1440 in this field. update grace period . If the Certificate Manager updates the CRL at a specific frequency, the server can be configured with a grace period before the update time to allow time to create the CRL and issue it. For example, if the server is configured to update the CRL every 20 minutes with a grace period of 2 minutes, and if the CRL is updated at 16:00, the CRL is updated again at 16:18.
Important Due to a known issue, when setting full and delta Certificate Revocation List schedules, the Update CRL every time a certificate is revoked or released from hold option also requires you to fill out the two grace period settings. Thus, in order to select this option you need to first select the Update CRL every option and enter a number for the update grace period # minutes box. The Cache tab sets whether caching is enabled and the cache frequency. Figure 7.3. CRL Cache Tab Enable CRL cache . This checkbox enables the cache, which is used to create delta CRLs. If the cache is disabled, delta CRLs will not be created. For more information about the cache, see Section 7.1, "About Revoking Certificates" . Update cache every . This field sets how frequently the cache is written to the internal database. Set to 0 to have the cache written to the database every time a certificate is revoked. Enable cache recovery . This checkbox allows the cache to be restored. Enable CRL cache testing . This checkbox enables CRL performance testing for specific CRL issuing points. CRLs generated with this option should not be used in deployed CAs, as CRLs issued for testing purposes contain data generated solely for the purpose of performance testing. The Format tab sets the formatting and contents of the CRLs that are created. There are two sections, CRL Format and CRL Contents . Figure 7.4. CRL Format Tab The CRL Format section has two options: Revocation list signing algorithm is a drop-down list of the allowed algorithms used to sign the CRL. Allow extensions for CRL v2 is a checkbox which enables CRL v2 extensions for the issuing point. If this is enabled, set the required CRL extensions described in Section 7.3.3, "Setting CRL Extensions" . Note Extensions must be turned on to create delta CRLs. The CRL Contents section has three checkboxes which set what types of certificates to include in the CRL: Include expired certificates . This includes revoked certificates that have expired. If this is enabled, information about revoked certificates remains in the CRL after the certificate expires. If this is not enabled, information about revoked certificates is removed when the certificate expires. CA certificates only . This includes only CA certificates in the CRL. Selecting this option creates an Authority Revocation List (ARL), which lists only revoked CA certificates. Certificates issued according to profiles . This only includes certificates that were issued according to the listed profiles; to specify multiple profiles, enter a comma-separated list. Click Save . Extensions are allowed for this issuing point and can be configured. See Section 7.3.3, "Setting CRL Extensions" for details. Note pkiconsole is being deprecated. 7.3.3. Setting CRL Extensions Note Extensions only need to be configured for an issuing point if the Allow extensions for CRL v2 checkbox is selected for that issuing point. When the issuing point is created, three extensions are automatically enabled: CRLReason , InvalidityDate , and CRLNumber . Other extensions are available but are disabled by default. These can be enabled and modified. For more information about the available CRL extensions, see Section B.4.2, "Standard X.509 v3 CRL Extensions Reference" . To configure CRL extensions, do the following: Open the CA console. In the navigation tree, select Certificate Manager , and then select CRL Issuing Points .
Select the issuing point name below the Issuing Points entry, and select the CRL Extension entry below the issuing point. The right pane shows the CRL Extensions Management tab, which lists configured extensions. Figure 7.5. CRL Extensions To modify a rule, select it, and click Edit/View . Most extensions have two options, enabling them and setting whether they are critical. Some require more information. Supply all required values. See Section B.4.2, "Standard X.509 v3 CRL Extensions Reference" for complete information about each extension and the parameters for those extensions. Click OK . Click Refresh to see the updated status of all the rules. Note pkiconsole is being deprecated. 7.3.4. Setting a CA to Use a Different Certificate to Sign CRLs For instruction on how to configure this feature by editing the CS.cfg file, see the Setting a CA to Use a Different Certificate to Sign CRLs section in the Red Hat Certificate System Planning, Installation, and Deployment Guide . 7.3.5. Generating CRLs from Cache By default, CRLs are generated from the CA's internal database. However, revocation information can be collected as the certificates are revoked and kept in memory. This revocation information can then be used to update CRLs from memory. Bypassing the database searches that are required to generate the CRL from the internal database significantly improves performance. Note Because of the performance enhancement from generating CRLs from cache, enable the enableCRLCache parameter in most environments. However, the Enable CRL cache testing parameter should not be enabled in a production environment. 7.3.5.1. Configuring CRL Generation from Cache in the Console Note pkiconsole is being deprecated. Open the console. In the Configuration tab, expand the Certificate Manager folder and the CRL Issuing Points subfolder. Select the MasterCRL node. Select Enable CRL cache . Save the changes. 7.3.5.2. Configuring CRL Generation from Cache in CS.cfg For instruction on how to configure this feature by editing the CS.cfg file, see the Configuring CRL Generation from Cache in CS.cfg section in the Red Hat Certificate System Planning, Installation, and Deployment Guide .
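The console steps in Section 7.3.5.1 have a CS.cfg equivalent. The sketch below is only an outline: the instance name pki-tomcat is a placeholder, and the parameter name follows the MasterCRL pattern, so confirm both against the Planning, Installation, and Deployment Guide for your version before editing:

# Stop the CA instance, check or set the cache parameter, then restart
systemctl stop pki-tomcatd@pki-tomcat.service
grep enableCRLCache /var/lib/pki/pki-tomcat/ca/conf/CS.cfg
# expected entry: ca.crl.MasterCRL.enableCRLCache=true
systemctl start pki-tomcatd@pki-tomcat.service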
[ "pkiconsole https://server.example.com:8443/ca", "pkiconsole https://server.example.com:8443/ca", "01:50,04:55,06:55;02:00,05:00,17:00", "pkiconsole https://server.example.com:8443/ca", "pkiconsole https://server.example.com:8443/ca" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/issuing_crls
Chapter 6. Managing virtual machines in the web console
Chapter 6. Managing virtual machines in the web console To manage virtual machines in a graphical interface on a RHEL 8 host, you can use the Virtual Machines pane in the RHEL 8 web console. 6.1. Overview of virtual machine management by using the web console The RHEL 8 web console is a web-based interface for system administration. As one of its features, the web console provides a graphical view of virtual machines (VMs) on the host system, and makes it possible to create, access, and configure these VMs. Note that to use the web console to manage your VMs on RHEL 8, you must first install a web console plug-in for virtualization. Next steps For instructions on enabling VM management in your web console, see Setting up the web console to manage virtual machines . For a comprehensive list of VM management actions that the web console provides, see Virtual machine management features available in the web console . For a list of features that are currently not available in the web console but can be used in the virt-manager application, see Differences between virtualization features in Virtual Machine Manager and the web console . 6.2. Setting up the web console to manage virtual machines Before using the RHEL 8 web console to manage virtual machines (VMs), you must install the web console virtual machine plug-in on the host. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Install the cockpit-machines plug-in. Verification Log in to the RHEL 8 web console. For details, see Logging in to the web console . If the installation was successful, Virtual Machines appears in the web console side menu. Additional resources Managing systems by using the RHEL 8 web console 6.3. Renaming virtual machines by using the web console You might need to rename an existing virtual machine (VM) to avoid naming conflicts or to assign a new unique name based on your use case. To rename the VM, you can use the RHEL web console. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The web console VM plug-in is installed on your system . The VM is shut down. Procedure In the Virtual Machines interface, click the Menu button ... of the VM that you want to rename. A drop-down menu appears with controls for various VM operations. Click Rename . The Rename a VM dialog appears. In the New name field, enter a name for the VM. Click Rename . Verification Check that the new VM name has appeared in the Virtual Machines interface. 6.4. Virtual machine management features available in the web console By using the RHEL 8 web console, you can perform the following actions to manage the virtual machines (VMs) on your system. Table 6.1. 
VM management tasks that you can perform in the RHEL 8 web console Task For details, see Create a VM and install it with a guest operating system Creating virtual machines and installing guest operating systems by using the web console Delete a VM Deleting virtual machines by using the web console Start, shut down, and restart the VM Starting virtual machines by using the web console and Shutting down and restarting virtual machines by using the web console Connect to and interact with a VM using a variety of consoles Interacting with virtual machines by using the web console View a variety of information about the VM Viewing virtual machine information by using the web console Adjust the host memory allocated to a VM Adding and removing virtual machine memory by using the web console Manage network connections for the VM Using the web console for managing virtual machine network interfaces Manage the VM storage available on the host and attach virtual disks to the VM Managing storage for virtual machines by using the web console Configure the virtual CPU settings of the VM Managing virtual CPUs by using the web console Live migrate a VM Live migrating a virtual machine by using the web console Manage host devices Managing host devices by using the web console Manage virtual optical drives Managing virtual optical drives Attach watchdog device Attaching a watchdog device to a virtual machine by using the web console 6.5. Differences between virtualization features in Virtual Machine Manager and the web console The Virtual Machine Manager ( virt-manager ) application is supported in RHEL 8, but has been deprecated. The web console is intended to become its replacement in a subsequent major release. It is, therefore, recommended that you get familiar with the web console for managing virtualization in a GUI. However, in RHEL 8, some VM management tasks can only be performed in virt-manager or the command line. The following table highlights the features that are available in virt-manager but not available in the RHEL 8.0 web console. If a feature is available in a later minor version of RHEL 8, the minimum RHEL 8 version appears in the Support in web console introduced column. Table 6.2. 
VM management tasks that cannot be performed using the web console in RHEL 8.0 Task Support in web console introduced Alternative method by using CLI Setting a virtual machine to start when the host boots RHEL 8.1 virsh autostart Suspending a virtual machine RHEL 8.1 virsh suspend Resuming a suspended virtual machine RHEL 8.1 virsh resume Creating file-system directory storage pools RHEL 8.1 virsh pool-define-as Creating NFS storage pools RHEL 8.1 virsh pool-define-as Creating physical disk device storage pools RHEL 8.1 virsh pool-define-as Creating LVM volume group storage pools RHEL 8.1 virsh pool-define-as Creating partition-based storage pools CURRENTLY UNAVAILABLE virsh pool-define-as Creating GlusterFS-based storage pools CURRENTLY UNAVAILABLE virsh pool-define-as Creating vHBA-based storage pools with SCSI devices CURRENTLY UNAVAILABLE virsh pool-define-as Creating Multipath-based storage pools CURRENTLY UNAVAILABLE virsh pool-define-as Creating RBD-based storage pools CURRENTLY UNAVAILABLE virsh pool-define-as Creating a new storage volume RHEL 8.1 virsh vol-create Adding a new virtual network RHEL 8.1 virsh net-create or virsh net-define Deleting a virtual network RHEL 8.1 virsh net-undefine Creating a bridge from a host machine's interface to a virtual machine CURRENTLY UNAVAILABLE virsh iface-bridge Creating a snapshot CURRENTLY UNAVAILABLE virsh snapshot-create-as Reverting to a snapshot CURRENTLY UNAVAILABLE virsh snapshot-revert Deleting a snapshot CURRENTLY UNAVAILABLE virsh snapshot-delete Cloning a virtual machine RHEL 8.4 virt-clone Migrating a virtual machine to another host machine RHEL 8.5 virsh migrate Attaching a host device to a VM RHEL 8.5 virt-xml --add-device Removing a host device from a VM RHEL 8.5 virt-xml --remove-device Additional resources Getting started with Virtual Machine Manager in RHEL 7 ( Deprecated in RHEL 8 and later )
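For the rows marked CURRENTLY UNAVAILABLE, or when you are still on RHEL 8.0, the CLI alternatives in the table are sufficient. A brief sketch with a hypothetical VM named demo-vm:

# Start the VM automatically when the host boots
virsh autostart demo-vm

# Suspend and later resume the VM
virsh suspend demo-vm
virsh resume demo-vm

# Snapshot operations that the web console does not offer
virsh snapshot-create-as demo-vm pre-upgrade "state before package upgrade"
virsh snapshot-revert demo-vm pre-upgrade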
[ "yum install cockpit-machines" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_virtualization/managing-virtual-machines-in-the-web-console_configuring-and-managing-virtualization
Chapter 5. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases
Chapter 5. Preparing a system with UEFI Secure Boot enabled to install and boot RHEL beta releases To enhance the security of your operating system, use the UEFI Secure Boot feature for signature verification when booting a Red Hat Enterprise Linux Beta release on systems having UEFI Secure Boot enabled. 5.1. UEFI Secure Boot and RHEL Beta releases UEFI Secure Boot requires that the operating system kernel is signed with a recognized private key. UEFI Secure Boot then verifies the signature using the corresponding public key. For Red Hat Enterprise Linux Beta releases, the kernel is signed with a Red Hat Beta-specific private key. UEFI Secure Boot attempts to verify the signature using the corresponding public key, but because the hardware does not recognize the Beta public key, the Red Hat Enterprise Linux Beta release system fails to boot. Therefore, to use UEFI Secure Boot with a Beta release, add the Red Hat Beta public key to your system using the Machine Owner Key (MOK) facility. 5.2. Adding a Beta public key for UEFI Secure Boot This section contains information about how to add a Red Hat Enterprise Linux Beta public key for UEFI Secure Boot. Prerequisites The UEFI Secure Boot is disabled on the system. The Red Hat Enterprise Linux Beta release is installed, and Secure Boot is disabled even after system reboot. You are logged in to the system, and the tasks in the Initial Setup window are complete. Procedure Begin to enroll the Red Hat Beta public key in the system's Machine Owner Key (MOK) list: USD(uname -r) is replaced by the kernel version - for example, 4.18.0-80.el8.x86_64 . Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Enroll MOK . Select Continue . Select Yes and enter the password. The key is imported into the system's firmware. Select Reboot . Enable Secure Boot on the system. 5.3. Removing a Beta public key If you plan to remove the Red Hat Enterprise Linux Beta release, and install a Red Hat Enterprise Linux General Availability (GA) release, or a different operating system, then remove the Beta public key. The procedure describes how to remove a Beta public key. Procedure Begin to remove the Red Hat Beta public key from the system's Machine Owner Key (MOK) list: Enter a password when prompted. Reboot the system and press any key to continue the startup. The Shim UEFI key management utility starts during the system startup. Select Reset MOK . Select Continue . Select Yes and enter the password that you had specified in step 2. The key is removed from the system's firmware. Select Reboot .
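To confirm the state of the system before and after these procedures, mokutil provides query options. A minimal sketch, with output that varies by firmware and enrolled keys:

# Check whether UEFI Secure Boot is currently enabled
mokutil --sb-state

# List the keys enrolled through MOK and look for the Red Hat Beta key
mokutil --list-enrolled | grep -i "red hat"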
[ "mokutil --import /usr/share/doc/kernel-keys/USD(uname -r)/kernel-signing-ca.cer", "mokutil --reset" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/booting-a-beta-system-with-uefi-secure-boot_rhel-installer
Chapter 2. RHOSP server group configuration for HA instances
Chapter 2. RHOSP server group configuration for HA instances Create an instance server group before you create the RHOSP HA cluster node instances. Group the instances by affinity policy. If you configure multiple clusters, ensure that you have only one server group per cluster. The affinity policy you set for the server group can determine whether the cluster remains operational if the hypervisor fails. The default affinity policy is affinity . With this affinity policy, all of the cluster nodes could be created on the same RHOSP hypervisor. In this case, if the hypervisor fails, the entire cluster fails. For this reason, set an affinity policy for the server group of anti-affinity or soft-anti-affinity . With an affinity policy of anti-affinity , the server group allows only one cluster node per Compute node. Attempting to create more cluster nodes than Compute nodes generates an error. While this configuration provides the highest protection against RHOSP hypervisor failures, it may require more resources to deploy large clusters than you have available. With an affinity policy of soft-anti-affinity , the server group distributes cluster nodes as evenly as possible across all Compute nodes. Although this provides less protection against hypervisor failures than a policy of anti-affinity , it provides a greater level of high availability than an affinity policy of affinity . Determining the server group affinity policy for your deployment requires balancing your cluster needs against the resources you have available by taking the following cluster components into account: The number of nodes in the cluster The number of RHOSP Compute nodes available The number of nodes required for cluster quorum to retain cluster operations For information about affinity and creating an instance server group, see Compute scheduler filters and the Command Line Interface Reference .
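As an illustration only, creating a server group with the soft-anti-affinity policy and launching a cluster node into it could look like the following. The group name, flavor, image, network, and instance name are placeholders, and the exact options for your deployment may differ:

# Create the server group with the soft-anti-affinity policy
openstack server group create --policy soft-anti-affinity ha-cluster-group

# Launch a cluster node instance as a member of that group
openstack server create --flavor m1.large --image rhel-9 --network cluster-net \
    --hint group=<server_group_UUID> ha-node-1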
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/ref_recommended-rhosp-server-group-configuration_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
Chapter 1. Getting support for Red Hat Advanced Cluster Security for Kubernetes
Chapter 1. Getting support for Red Hat Advanced Cluster Security for Kubernetes This topic provides information about the technical support for Red Hat Advanced Cluster Security for Kubernetes. If you experience difficulty with a procedure described in this documentation, or with Red Hat Advanced Cluster Security for Kubernetes in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. If you have a suggestion for improving the documentation or have identified an error, create a Jira issue against the Red Hat Advanced Cluster Security for Kubernetes product for the Documentation component. Ensure that you include specific details such as the section name and the version of Red Hat Advanced Cluster Security for Kubernetes for us to manage your feedback effectively. 1.1. About the Red Hat Knowledgebase The Red Hat Knowledgebase provides rich content aimed at helping you make the most of Red Hat products and technologies. The Red Hat Knowledgebase consists of articles, product documentation, and videos outlining best practices on installing, configuring, and using Red Hat products. In addition, you can search for solutions to known issues, each providing concise root cause descriptions and remedial steps. 1.2. Searching the Red Hat Knowledgebase In the event of a Red Hat Advanced Cluster Security for Kubernetes issue, you can perform an initial search to determine if a solution already exists within the Red Hat Knowledgebase. Prerequisites You have a Red Hat Customer Portal account. Procedure Log in to the Red Hat Customer Portal . In the main Red Hat Customer Portal search field, input keywords and strings relating to the problem, including: Red Hat Advanced Cluster Security for Kubernetes components (such as etcd ) Related procedure (such as installation ) Warnings, error messages, and other outputs related to explicit failures Click Search . Select the Red Hat Advanced Cluster Security for Kubernetes product filter. Select the Knowledgebase content type filter. 1.3. Generating a diagnostic bundle You can generate a diagnostic bundle and send that data to enable the support team to provide insights into the status and health of Red Hat Advanced Cluster Security for Kubernetes components. Note The diagnostic bundle is unencrypted, and depending upon the number of clusters in your environment, the bundle size is between 100 KB and 1 MB. 1.3.1. Generating a diagnostic bundle by using the RHACS portal You can generate a diagnostic bundle by using the system health dashboard in the RHACS portal. Prerequisites To generate a diagnostic bundle, you need read permission for the Administration resource. Procedure In the RHACS portal, select Platform Configuration System Health . On the System Health view header, click Generate Diagnostic Bundle . For the Filter by clusters drop-down menu, select the clusters for which you want to generate the diagnostic data. For Filter by starting time , specify the date and time (in UTC format) from which you want to include the diagnostic data. Click Download Diagnostic Bundle . 1.3.2. Generating a diagnostic bundle by using the roxctl CLI You can generate a diagnostic bundle with the Red Hat Advanced Cluster Security for Kubernetes (RHACS) administrator password or API token and central address by using the roxctl CLI. 
Prerequisites To generate a diagnostic bundle, you need read permission for the Administration resource. You must have configured the RHACS administrator password or API token and central address. Procedure To generate a diagnostic bundle by using the RHACS administrator password, perform the following steps: Run the following command to configure the ROX_PASSWORD and ROX_CENTRAL_ADDRESS environment variables: USD export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1 1 For <rox_password> , specify the RHACS administrator password. Run the following command to generate a diagnostic bundle by using the RHACS administrator password: USD roxctl -e "USDROX_CENTRAL_ADDRESS" -p "USDROX_PASSWORD" central debug download-diagnostics To generate a diagnostic bundle by using the API token, perform the following steps: Run the following command to configure the ROX_API_TOKEN environment variable: USD export ROX_API_TOKEN= <api_token> Run the following command to generate a diagnostic bundle by using the API token: USD roxctl -e "USDROX_CENTRAL_ADDRESS" central debug download-diagnostics 1.4. Submitting a support case Prerequisites You have access to the cluster. You have a Red Hat Customer Portal account. You have a Red Hat OpenShift Platform Plus subscription. Procedure Log in to the Red Hat Customer Portal and select SUPPORT CASES Open a case . Select the appropriate category for your issue (such as Defect / Bug ), product ( Red Hat Advanced Cluster Security for Kubernetes ), and product version ( 4.5 , if this is not already autofilled). Review the list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. If the suggested articles do not address the issue, click Continue . Enter a concise but descriptive problem summary and further details about the symptoms being experienced, as well as your expectations. Review the updated list of suggested Red Hat Knowledgebase solutions for a potential match against the problem that is being reported. The list is refined as you provide more information during the case creation process. If the suggested articles do not address the issue, click Continue . Ensure that the account information presented is as expected, and if not, amend accordingly. Upload the generated diagnostic bundle and click Continue . Input relevant case management details and click Continue . Preview the case details and click Submit .
[ "export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" -p \"USDROX_PASSWORD\" central debug download-diagnostics", "export ROX_API_TOKEN= <api_token>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug download-diagnostics" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/support/getting-support
Chapter 11. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a vaccination appointment scheduler quick start guide
Chapter 11. Red Hat build of OptaPlanner on Red Hat build of Quarkus: a vaccination appointment scheduler quick start guide You can use the OptaPlanner vaccination appointment scheduler quick start to develop a vaccination schedule that is both efficient and fair. The vaccination appointment scheduler uses artificial intelligence (AI) to prioritize people and allocate time slots based on multiple constraints and priorities. Prerequisites OpenJDK 11 or later is installed. Red Hat build of OpenJDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. An IDE, such as IntelliJ IDEA, VS Code, Eclipse, or NetBeans, is available. You have created a Quarkus OptaPlanner project as described in Chapter 6, Getting Started with OptaPlanner and Quarkus . 11.1. How the OptaPlanner vaccination appointment scheduler works There are two main approaches to scheduling appointments. The system can either let a person choose an appointment slot (user-selects) or the system assigns a slot and tells the person when and where to attend (system-automatically-assigns). The OptaPlanner vaccination appointment scheduler uses the system-automatically-assigns approach. With the OptaPlanner vaccination appointment scheduler, you can create an application where people provide their information to the system and the system assigns an appointment. Characteristics of this approach: Appointment slots are allocated based on priority. The system allocates the best appointment time and location based on preconfigured planning constraints. The system is not overwhelmed by a large number of users competing for a limited number of appointments. This approach solves the problem of vaccinating as many people as possible by using planning constraints to create a score for each person. The person's score determines when they get an appointment. The higher the person's score, the better chance they have of receiving an earlier appointment. 11.1.1. OptaPlanner vaccination appointment scheduler constraints OptaPlanner vaccination appointment scheduler constraints are either hard, medium, or soft: Hard constraints cannot be broken. If any hard constraint is broken, the plan is unfeasible and cannot be executed: Capacity: Do not over-book vaccine capacity at any time at any location. Vaccine max age: If a vaccine has a maximum age, do not administer it to people who at the time of the first dose vaccination are older than the vaccine maximum age. Ensure people are given a vaccine type appropriate for their age. For example, do not assign a 75 year old person an appointment for a vaccine that has a maximum age restriction of 65 years. Required vaccine type: Use the required vaccine type. For example, the second dose of a vaccine must be the same vaccine type as the first dose. Ready date: Administer the vaccine on or after the specified date. For example, if a person receives a second dose, do not administer it before the recommended earliest possible vaccination date for the specific vaccine type, for example 26 days after the first dose. Due date: Administer the vaccine on or before the specified date. For example, if a person receives a second dose, administer it before the recommended vaccination final due date for the specific vaccine, for example three months after the first dose. Restrict maximum travel distance: Assign each person to one of a group of vaccination centers nearest to them. 
This is typically one of three centers. This restriction is calculated by travel time, not distance, so a person that lives in an urban area usually has a lower maximum distance to travel than a person living in a rural area. Medium constraints decide who does not get an appointment when there is not enough capacity to assign appointments to everyone. This is called overconstrained planning: Schedule second dose vaccinations: Do not leave any second dose vaccination appointments unassigned unless the ideal date falls outside of the planning window. Schedule people based on their priority rating: Each person has a priority rating. This is typically their age but it can be much higher if they are, for example, a health care worker. Leave only people with the lowest priority ratings unassigned. They will be considered in the next run. This constraint is softer than the second dose constraint because the second dose is always prioritized over priority rating. Soft constraints should not be broken: Preferred vaccination center: If a person has a preferred vaccination center, give them an appointment at that center. Distance: Minimize the distance that a person must travel to their assigned vaccination center. Ideal date: Administer the vaccine on or as close to the specified date as possible. For example, if a person receives a second dose, administer it on the ideal date for the specific vaccine, for example 28 days after the first dose. This constraint is softer than the distance constraint to avoid sending people halfway across the country just to be one day closer to their ideal date. Priority rating: Schedule people with a higher priority rating earlier in the planning window. This constraint is softer than the distance constraint to avoid sending people halfway across the country. This constraint is also softer than the ideal date constraint because the second dose is prioritized over priority rating. Hard constraints are weighted against other hard constraints. Soft constraints are weighted against other soft constraints. However, hard constraints always take precedence over medium and soft constraints. If a hard constraint is broken, then the plan is not feasible. But if no hard constraints are broken then soft and medium constraints are considered in order to determine priority. Because there are often more people than available appointment slots, you must prioritize. Second dose appointments are always assigned first to avoid creating a backlog that would overwhelm your system later. After that, people are assigned based on their priority rating. Everyone starts with a priority rating that is their age. Doing this prioritizes older people over younger people. After that, people that are in specific priority groups receive, for example, a few hundred extra points. This varies based on the priority of their group. For example, nurses might receive an extra 1000 points. This way, older nurses are prioritized over younger nurses and young nurses are prioritized over people who are not nurses. The following table illustrates this concept: Table 11.1. Priority rating table Age Job Priority rating 60 nurse 1060 33 nurse 1033 71 retired 71 52 office worker 52 11.1.2. The OptaPlanner solver At the core of OptaPlanner is the solver, the engine that takes the problem data set and overlays the planning constraints and configurations. The problem data set includes all of the information about the people, the vaccines, and the vaccination centers. 
The solver works through the various combinations of data and eventually determines an optimized appointment schedule with people assigned to vaccination appointments at a specific center. The following illustration shows a schedule that the solver created: 11.1.3. Continuous planning Continuous planning is the technique of managing one or more upcoming planning periods at the same time and repeating that process monthly, weekly, daily, hourly, or even more frequently. The planning window advances incrementally by a specified interval. The following illustration shows a two week planning window that is updated daily: The two week planning window is divided in half. The first week is in the published state and the second week is in the draft state. People are assigned to appointments in both the published and draft parts of the planning window. However, only people in the published part of the planning window are notified of their appointments. The other appointments can still change easily in the next run. Doing this gives OptaPlanner the flexibility to change the appointments in the draft part when you run the solver again, if necessary. For example, if a person who needs a second dose has a ready date of Monday and an ideal date of Wednesday, OptaPlanner does not have to give them an appointment for Monday if it can demonstrate that it can give them a draft appointment later in the week. You can determine the size of the planning window, but be aware of the size of the problem space. The problem space is all of the various elements that go into creating the schedule. The more days you plan ahead, the larger the problem space. 11.1.4. Pinned planning entities If you are continuously planning on a daily basis, there will be appointments within the two week period that are already allocated to people. To ensure that appointments are not double-booked, OptaPlanner marks existing appointments as allocated by pinning them. Pinning is used to anchor one or more specific assignments and force OptaPlanner to schedule around those fixed assignments. A pinned planning entity, such as an appointment, does not change during solving. Whether an entity is pinned or not is determined by the appointment state. An appointment can have five states: Open , Invited , Accepted , Rejected , or Rescheduled . Note You do not actually see these states directly in the quick start demo code because the OptaPlanner engine is only interested in whether the appointment is pinned or not. You need to be able to plan around appointments that have already been scheduled. An appointment with the Invited or Accepted state is pinned. Appointments with the Open , Rescheduled , and Rejected states are not pinned and are available for scheduling. In this example, when the solver runs it searches across the entire two week planning window in both the published and draft ranges. The solver considers any unpinned entities, appointments with the Open , Rescheduled , or Rejected states, in addition to the unscheduled input data, to find the optimal solution. If the solver is run daily, you will see a new day added to the schedule before you run the solver. Notice that the appointments on the new day have been assigned and Amy and Edna, who were previously scheduled in the draft part of the planning window, are now scheduled in the published part of the window. This was possible because Gus and Hugo requested a reschedule.
This will not cause any confusion because Amy and Edna were never notified about their draft dates. Now, because they have appointments in the published section of the planning window, they will be notified and asked to accept or reject their appointments, and their appointments are now pinned. 11.2. Downloading and running the OptaPlanner vaccination appointment scheduler Download the OptaPlanner vaccination appointment scheduler quick start archive, start it in Quarkus development mode, and view the application in a browser. Quarkus development mode enables you to make changes and update your application while it is running. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts ( rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip ). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. Navigate to the optaplanner-quickstarts-8.13.0.Final-redhat-00013 directory. Navigate to the optaplanner-quickstarts-8.13.0.Final-redhat-00013/use-cases/vaccination-scheduling directory. Enter the following command to start the OptaPlanner vaccination appointment scheduler in development mode: $ mvn quarkus:dev To view the OptaPlanner vaccination appointment scheduler, enter the following URL in a web browser. To run the OptaPlanner vaccination appointment scheduler, click Solve . Make changes to the source code, then press the F5 key to refresh your browser. Notice that the changes that you made are now available. 11.3. Package and run the OptaPlanner vaccination appointment scheduler When you have completed development work on the OptaPlanner vaccination appointment scheduler in quarkus:dev mode, run the application as a conventional JAR file. Prerequisites You have downloaded the OptaPlanner vaccination appointment scheduler quick start. For more information, see Section 11.2, "Downloading and running the OptaPlanner vaccination appointment scheduler" . Procedure Navigate to the /use-cases/vaccination-scheduling directory. To compile the OptaPlanner vaccination appointment scheduler, enter the following command: $ mvn package To run the compiled OptaPlanner vaccination appointment scheduler, enter the following command: $ java -jar ./target/*-runner.jar Note To run the application on port 8081, add -Dquarkus.http.port=8081 to the preceding command. To start the OptaPlanner vaccination appointment scheduler, enter the following URL in a web browser. 11.4. Run the OptaPlanner vaccination appointment scheduler as a native executable To take advantage of the small memory footprint and access speeds that Quarkus offers, compile the OptaPlanner vaccination appointment scheduler in Quarkus native mode. Procedure Install GraalVM and the native-image tool. For information, see Configuring GraalVM on the Quarkus website. Navigate to the /use-cases/vaccination-scheduling directory. To compile the OptaPlanner vaccination appointment scheduler natively, enter the following command: $ mvn package -Dnative -DskipTests To run the native executable, enter the following command: $ ./target/*-runner To start the OptaPlanner vaccination appointment scheduler, enter the following URL in a web browser. 11.5. Additional resources Vaccination appointment scheduling video
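Returning to the pinned planning entities described in Section 11.1.4, the pin can be declared on the planning entity itself. The sketch below is a hypothetical simplification (the actual quick start class is more involved, and the planning variables are omitted); it only shows how the Invited and Accepted states could be mapped to OptaPlanner's @PlanningPin annotation:

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.entity.PlanningPin;

@PlanningEntity
public class Appointment {

    public enum AppointmentState {
        OPEN, INVITED, ACCEPTED, REJECTED, RESCHEDULED
    }

    private AppointmentState state = AppointmentState.OPEN;

    // Planning variables (time slot, vaccination center, person) omitted for brevity.

    // Invited and Accepted appointments are pinned: the solver must plan around them.
    // Open, Rejected, and Rescheduled appointments stay free for the solver to (re)assign.
    @PlanningPin
    public boolean isPinned() {
        return state == AppointmentState.INVITED || state == AppointmentState.ACCEPTED;
    }

    public AppointmentState getState() {
        return state;
    }

    public void setState(AppointmentState state) {
        this.state = state;
    }
}
```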
[ "mvn quarkus:dev", "http://localhost:8080/", "mvn package", "java -jar ./target/*-runner.jar", "http://localhost:8080/", "mvn package -Dnative -DskipTests", "./target/*-runner", "http://localhost:8080/" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/assembly-optaplanner-vaccination_optaplanner-quickstarts
13.7. Deleting Indexes
13.7. Deleting Indexes This section describes how to remove attributes and index types from the index. 13.7.1. Deleting an Attribute from the Default Index Entry When using the default settings of Directory Server, several attributes listed in the default index entry, such as sn , are indexed. The following attributes are part of the default index: Table 13.1. Default Index Attributes aci cn entryusn givenName mail mailAlternateAddress mailHost member memberOf nsUniqueId ntUniqueId ntUserDomainId numsubordinates objectclass owner parentid seeAlso sn telephoneNumber uid uniquemember Warning Removing system indexes can significantly affect the Directory Server performance. For example, to remove the sn attribute from the default index: Remove the attribute from the cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config entry: If you do not remove the attribute from this entry, the index for the sn attribute is automatically recreated and corrupted after the server is restarted. Remove the cn= attribute_name ,cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config entry. For details, see Section 13.7.2, "Removing an Attribute from the Index" . 13.7.2. Removing an Attribute from the Index In certain situations, you might want to remove an attribute from the index. This section describes the procedure using the command line and using the web console. 13.7.2.1. Removing an Attribute from the Index Using the Command Line To remove an attribute from the index: If the attribute to remove is listed in the cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config default index entry, remove it from this entry first. For details, see Section 13.7.1, "Deleting an Attribute from the Default Index Entry" . Remove the attribute from the index. For example: After deleting the entry, Directory Server no longer maintains the index for the attribute. Recreate the attribute index. See Section 13.3, "Creating New Indexes to Existing Databases" . 13.7.2.2. Removing an Attribute from the Index Using the Web Console To remove an attribute from the index: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the suffix entry. Open the Indexes tab. Click the Actions button next to the attribute for which you want to remove the index, and select Delete Index . Click Yes to confirm. 13.7.3. Deleting Index Types Using the Command Line For example, to remove the sub index type of the sn attribute from the index: Remove the index type: After deleting the index entry, Directory Server no longer maintains the substring index for the attribute. Recreate the attribute index. See Section 13.3, "Creating New Indexes to Existing Databases" . 13.7.4. Removing Browsing Indexes This section describes how to remove browsing entries from a database. 13.7.4.1. Removing Browsing Indexes Using the Command Line The entries for an alphabetical browsing index and virtual list view (VLV) are the same. This section describes the steps involved in removing browsing indexes. To remove a browsing index or virtual list view index using the command line: Remove the browsing index entries from the cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config entry. For example: After deleting the two browsing index entries, Directory Server no longer maintains these indexes. Recreate the attribute index. See Section 13.3, "Creating New Indexes to Existing Databases" .
[ "ldapdelete -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x cn=sn,cn=default indexes,cn=config,cn=ldbm database,cn=plugins,cn=config", "ldapdelete -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x cn=sn,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config", "ldapmodify -D \"cn=Directory Manager\" -W -x dn: cn=sn,cn=index,cn= database_name ,cn=ldbm database,cn=plugins,cn=config changetype: modify delete: nsIndexType nsIndexType: sub", "ldapdelete -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x \"cn=MCC ou=People dc=example dc=com,cn=userRoot,cn=ldbm database,cn=plugins,cn=config\" \"cn=by MCC ou=People dc=example dc=com,cn=MCC ou=People dc=example dc=com,cn=userRoot,cn=ldbm database,cn=plugins,cn=config\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Managing_Indexes-Deleting_Indexes
Chapter 3. Distribution selection
Chapter 3. Distribution selection Red Hat provides several distributions of Red Hat build of OpenJDK. This module helps you select the distribution that is right for your needs. All distributions of OpenJDK contain the JDK Flight Recorder (JFR) feature. This feature produces diagnostics and profiling data that can be consumed by other applications, such as JDK Mission Control (JMC). Red Hat build of OpenJDK RPMs for RHEL 8 RPM distributions of Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 8 for RHEL 8. Red Hat build of OpenJDK 8 JRE portable archive for RHEL Portable Red Hat build of OpenJDK 8 JRE archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 8 portable archive for RHEL Portable Red Hat build of OpenJDK 8 archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 11 JRE portable archive for RHEL Portable Red Hat build of OpenJDK 11 JRE archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK 11 portable archive for RHEL Portable Red Hat build of OpenJDK 11 archive distribution for RHEL 7 and 8 hosts. Red Hat build of OpenJDK archive for Windows Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 8 distributions for all supported Windows hosts. Recommended for cases where multiple Red Hat build of OpenJDK versions may be installed on a host. This distribution includes the following: Java Web Start Mission Control Red Hat build of OpenJDK installers for Windows Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 8 MSI installers for all supported Windows hosts. Optionally installs Java Web Start and sets environment variables. Suitable for system wide installs of a single Red Hat build of OpenJDK version. Additional resources For more information about the JDK Flight Recorder (JFR), see Introduction to JDK Flight Recorder . For more information about JDK Mission Control (JMC), see Introduction to JDK Mission Control . JDK Mission Control is available for RHEL with Red Hat Software Collections 3.2 .
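Because all of these distributions ship JDK Flight Recorder, a small illustrative sketch of starting a recording from application code is shown below. It uses the jdk.jfr API, which is available on Red Hat build of OpenJDK 11 and later (on OpenJDK 8, recordings are typically started with the -XX:StartFlightRecording option instead); the file name and the sleep that stands in for a real workload are arbitrary:

```java
import java.nio.file.Path;
import jdk.jfr.Recording;

public class FlightRecorderDemo {

    public static void main(String[] args) throws Exception {
        // Start a recording with the default event settings.
        try (Recording recording = new Recording()) {
            recording.start();

            // ... run the workload you want to profile ...
            Thread.sleep(2_000);

            recording.stop();
            // Dump the recording to a file that JDK Mission Control can open.
            recording.dump(Path.of("demo.jfr"));
        }
    }
}
```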
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/getting_started_with_red_hat_build_of_openjdk_8/openjdk-distribution-selection
Chapter 38. File language
Chapter 38. File language The File Expression Language is an extension to the Simple language, adding file related capabilities. These capabilities are related to common use cases working with file paths and names. The goal is to allow expressions to be used with the File and FTP components for setting dynamic file patterns for both consumer and producer. Note The file language is merged with the Simple language, which means you can use all the file syntax directly within the simple language. 38.1. Dependencies The File language is part of camel-core . When using file with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> 38.2. File Language options The File language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 38.3. Syntax This language is an extension to the Simple language, so the Simple syntax also applies. The table below therefore only lists the additional file related functions. All the file tokens use the same expression name as the method on the java.io.File object, for instance file:absolute refers to the java.io.File.getAbsolute() method. Notice that not all expressions are supported by the current Exchange. For instance, the FTP component supports some options, whereas the File component supports all of them. Expression Type File Consumer File Producer FTP Consumer FTP Producer Description file:name String yes no yes no refers to the file name (is relative to the starting directory, see note below) file:name.ext String yes no yes no refers to the file extension only file:name.ext.single String yes no yes no refers to the file extension. If the file extension has multiple dots, then this expression strips and only returns the last part. file:name.noext String yes no yes no refers to the file name with no extension (is relative to the starting directory, see note below) file:name.noext.single String yes no yes no refers to the file name with no extension (is relative to the starting directory, see note below). If the file extension has multiple dots, then this expression strips only the last part, and keeps the others. file:onlyname String yes no yes no refers to the file name only with no leading paths. file:onlyname.noext String yes no yes no refers to the file name only with no extension and with no leading paths. file:onlyname.noext.single String yes no yes no refers to the file name only with no extension and with no leading paths. If the file extension has multiple dots, then this expression strips only the last part, and keeps the others. file:ext String yes no yes no refers to the file extension only file:parent String yes no yes no refers to the file parent file:path String yes no yes no refers to the file path file:absolute Boolean yes no no no refers to whether the file is regarded as absolute or relative file:absolute.path String yes no no no refers to the absolute file path file:length Long yes no yes no refers to the file length returned as a Long type file:size Long yes no yes no refers to the file length returned as a Long type file:modified Date yes no yes no Refers to the file last modified returned as a Date type date:command:pattern String yes yes yes yes for date formatting using the java.text.SimpleDateFormat patterns.
Is an extension to the Simple language. Additional command is: file (consumers only) for the last modified timestamp of the file. Notice: all the commands from the Simple language can also be used. 38.4. File token example 38.4.1. Relative paths We have a java.io.File handle for the file hello.txt in the following relative directory: .\filelanguage\test . And we configure our endpoint to use this starting directory .\filelanguage . The file tokens will return as: Expression Returns file:name test\hello.txt file:name.ext txt file:name.noext test\hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent filelanguage\test file:path filelanguage\test\hello.txt file:absolute false file:absolute.path \workspace\camel\camel-core\target\filelanguage\test\hello.txt 38.4.2. Absolute paths We have a java.io.File handle for the file hello.txt in the following absolute directory: \workspace\camel\camel-core\target\filelanguage\test . And we configure our endpoint to use the absolute starting directory \workspace\camel\camel-core\target\filelanguage . The file tokens will return as: Expression Returns file:name test\hello.txt file:name.ext txt file:name.noext test\hello file:onlyname hello.txt file:onlyname.noext hello file:ext txt file:parent \workspace\camel\camel-core\target\filelanguage\test file:path \workspace\camel\camel-core\target\filelanguage\test\hello.txt file:absolute true file:absolute.path \workspace\camel\camel-core\target\filelanguage\test\hello.txt 38.5. Samples You can enter a fixed file name such as myfile.txt : fileName="myfile.txt" Let's assume we use the file consumer to read files and want to move the read files to a backup folder with the current date as a sub folder. This can be done using an expression like: fileName="backup/${date:now:yyyyMMdd}/${file:name.noext}.bak" Relative folder names are also supported, so if the backup folder should be a sibling folder, you can append .. as shown: fileName="../backup/${date:now:yyyyMMdd}/${file:name.noext}.bak" As this is an extension to the Simple language, we have access to all the goodies from that language also, so in this use case we want to use the in.header.type as a parameter in the dynamic expression: fileName="../backup/${date:now:yyyyMMdd}/type-${in.header.type}/backup-of-${file:name.noext}.bak" If you have a custom date you want to use in the expression then Camel supports retrieving dates from the message header: fileName="orders/order-${in.header.customerId}-${date:in.header.orderDate:yyyyMMdd}.xml" And finally we can also use a bean expression to invoke a POJO class that generates some String output (or convertible to String) to be used: fileName="uniquefile-${bean:myguidgenerator.generateid}.txt" Of course all this can be combined in one expression where you can use the File language and the Simple language together. This is pretty powerful for those common file path patterns. 38.6. Spring Boot Auto-Configuration The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center.
String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. 
String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. 
This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. 
true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. 
CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. 
If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). 
true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. 
Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>", "fileName=\"myfile.txt\"", "fileName=\"backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak\"", "fileName=\"../backup/USD{date:now:yyyyMMdd}/USD{file:name.noext}.bak\"", "fileName=\"../backup/USD{date:now:yyyyMMdd}/type-USD{in.header.type}/backup-of-USD{file:name.noext}.bak\"", "fileName=\"orders/order-USD{in.header.customerId}-USD{date:in.header.orderDate:yyyyMMdd}.xml\"", "fileName=\"uniquefile-USD{bean:myguidgenerator.generateid}.txt\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-file-language-starter
2.2.11. Reverse Path Forwarding
2.2.11. Reverse Path Forwarding Reverse Path Forwarding is used to prevent packets that arrived via one interface from leaving via a different interface. When outgoing routes and incoming routes are different, it is sometimes referred to as asymmetric routing . Routers often route packets this way, but most hosts should not need to do this. Exceptions are applications that involve sending traffic out over one link and receiving traffic over another link from a different service provider, for example, using leased lines in combination with xDSL, or satellite links with 3G modems. If such a scenario is applicable to you, then turning off reverse path forwarding on the incoming interface is necessary. In short, unless you know that it is required, it is best enabled as it prevents users spoofing IP addresses from local subnets and reduces the opportunity for DDoS attacks. Note Red Hat Enterprise Linux 6 (unlike Red Hat Enterprise Linux 5) defaults to using Strict Reverse Path Forwarding . Red Hat Enterprise Linux 6 follows the Strict Reverse Path recommendation from RFC 3704, Ingress Filtering for Multihomed Networks. This currently only applies to IPv4 in Red Hat Enterprise Linux 6. Warning If forwarding is enabled, then Reverse Path Forwarding should only be disabled if there are other means for source-address validation (such as iptables rules, for example). rp_filter Reverse Path Forwarding is enabled by means of the rp_filter directive. The rp_filter option is used to direct the kernel to select from one of three modes. It takes the following form when setting the default behavior: where INTEGER is one of the following: 0 - No source validation. 1 - Strict mode as defined in RFC 3704. 2 - Loose mode as defined in RFC 3704. The setting can be overridden per network interface using net.ipv4.conf.interface.rp_filter . To make these settings persistent across reboot, modify the /etc/sysctl.conf file. 2.2.11.1. Additional Resources The following are resources that explain more about Reverse Path Forwarding. Installed Documentation /usr/share/doc/kernel-doc-version/Documentation/networking/ip-sysctl.txt - This file contains a complete list of files and options available in the /proc/sys/net/ipv4/ directory. Useful Websites https://access.redhat.com/knowledge/solutions/53031 - The Red Hat Knowledgebase article about rp_filter . See RFC 3704 for an explanation of Ingress Filtering for Multihomed Networks.
[ "~]# /sbin/sysctl -w net.ipv4.conf.default.rp_filter= INTEGER" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security-reverse_path_forwarding
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_connectors_and_load_balancing_guide/making-open-source-more-inclusive_http-connectors-lb-guide
Jenkins
Jenkins OpenShift Container Platform 4.17 Jenkins Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/jenkins/index
Chapter 12. Installing a cluster into a shared VPC on GCP using Deployment Manager templates
Chapter 12. Installing a cluster into a shared VPC on GCP using Deployment Manager templates In OpenShift Container Platform version 4.13, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide. In this context, a cluster installed into a shared VPC is a cluster that is configured to use a VPC from a project different from where the cluster is being deployed. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IPs from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation. The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 12.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 12.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 12.4. Configuring the GCP project that hosts your cluster Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 12.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 12.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 12.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 12.2. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 12.4.3. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 12.3. 
GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 12.4.4. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 12.4.4.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. 
If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 12.4. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 12.4.5. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 12.4.6. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 12.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 12.5.1. 
Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 12.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 12.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 12.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 12.1. Machine series A2 A3 C2 C2D C3 C3D C4 E2 M1 N1 N2 N2D N4 Tau T2D 12.5.4. 
Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 12.6. Configuring the GCP project that hosts your shared VPC network If you use a shared Virtual Private Cloud (VPC) to host your OpenShift Container Platform cluster in Google Cloud Platform (GCP), you must configure the project that hosts it. Note If you already have a project that hosts the shared VPC network, review this section to ensure that the project meets all of the requirements to install an OpenShift Container Platform cluster. Procedure Create a project to host the shared VPC for your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Create a service account in the project that hosts your shared VPC. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. The service account for the project that hosts the shared VPC network requires the following roles: Compute Network User Compute Security Admin Deployment Manager Editor DNS Administrator Security Admin Network Management Admin 12.6.1. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the project that hosts the shared VPC that you install the cluster into. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
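After you update the registrar records, you can optionally confirm that the delegation now points at the Cloud DNS name servers before you continue. The following is a minimal Python sketch, assuming the dnspython package is installed; the domain and the expected name server values are placeholders that you replace with your own. Keep in mind that registrar changes can take time to propagate.
import dns.resolver  # assumes the dnspython package is installed (pip install dnspython)

# Placeholder values: use your own domain or subdomain and the name servers
# listed in your Cloud DNS public hosted zone (you typically have four).
domain = "clusters.openshiftcorp.com"
expected_name_servers = {
    "ns-cloud-a1.googledomains.com.",
    "ns-cloud-a2.googledomains.com.",
    "ns-cloud-a3.googledomains.com.",
    "ns-cloud-a4.googledomains.com.",
}

answer = dns.resolver.resolve(domain, "NS")
published = {record.target.to_text() for record in answer}

if published == expected_name_servers:
    print("Delegation matches the Cloud DNS name servers.")
else:
    print("Delegation does not match yet:", sorted(published))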
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 12.6.2. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Export the following variables required by the resource definition: Export the control plane CIDR: USD export MASTER_SUBNET_CIDR='10.0.0.0/17' Export the compute CIDR: USD export WORKER_SUBNET_CIDR='10.0.128.0/17' Export the region to deploy the VPC network and cluster to: USD export REGION='<region>' Export the variable for the ID of the project that hosts the shared VPC: USD export HOST_PROJECT=<host_project> Export the variable for the email of the service account that belongs to host project: USD export HOST_PROJECT_ACCOUNT=<host_service_account_email> Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the prefix of the network name. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1 1 For <vpc_deployment_name> , specify the name of the VPC to deploy. Export the VPC variable that other components require: Export the name of the host project network: USD export HOST_PROJECT_NETWORK=<vpc_network> Export the name of the host project control plane subnet: USD export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet> Export the name of the host project compute subnet: USD export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet> Set up the shared VPC. See Setting up Shared VPC in the GCP documentation. 12.6.2.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 12.2. 
01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 12.7. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 12.7.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . 
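Before you continue, you can optionally spot-check a few values in the file that matter for a shared VPC, user-provisioned installation, such as the project, region, and publish setting described later in this chapter. The following is a minimal Python sketch, assuming the PyYAML package is available; replace <installation_directory> with your own directory.
import yaml  # assumes the PyYAML package is installed

# Replace <installation_directory> with the directory that holds install-config.yaml.
with open("<installation_directory>/install-config.yaml") as f:
    config = yaml.safe_load(f)

# Values that are easy to get wrong in a shared VPC, user-provisioned installation.
print("cluster name:", config["metadata"]["name"])
print("base domain: ", config["baseDomain"])
print("GCP project: ", config["platform"]["gcp"]["projectID"])
print("GCP region:  ", config["platform"]["gcp"]["region"])
print("publish:     ", config.get("publish", "External"))  # must be Internal for a shared VPC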
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 12.7.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 12.7.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Important Confidential Computing is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Due to a known issue in OpenShift Container Platform 4.13.3 and earlier versions, you cannot use persistent volume storage on a cluster with Confidential VMs on Google Cloud Platform (GCP). This issue was resolved in OpenShift Container Platform 4.13.4. For more information, see OCPBUGS-11768 . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 12.7.4. 
Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{"auths": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15 1 Specify the public DNS on the host project. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 8 10 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter applies to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 9 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 11 Specify the main project where the VM instances reside. 12 Specify the region that your VPC network is in. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 
14 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 15 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . To use a shared VPC in a cluster that uses infrastructure that you provision, you must set publish to Internal . The installation program will no longer be able to access the public DNS zone for the base domain in the host project. 12.7.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . 
Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.7.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. 
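If you prefer to script this cleanup, the following minimal Python sketch removes the same control plane machine manifests; the file pattern mirrors the rm command above, and <installation_directory> is a placeholder for your own directory.
import glob
import os

# Replace <installation_directory> with the directory you passed to openshift-install.
pattern = os.path.join(
    "<installation_directory>", "openshift",
    "99_openshift-cluster-api_master-machines-*.yaml",
)

for path in glob.glob(pattern):
    os.remove(path)
    print("removed", path)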
Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Remove the privateZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {} 1 Remove this section completely. Configure the cloud provider for your VPC. Open the <installation_directory>/manifests/cloud-provider-config.yaml file. Add the network-project-id parameter and set its value to the ID of project that hosts the shared VPC network. Add the network-name parameter and set its value to the name of the shared VPC network that hosts the OpenShift Container Platform cluster. Replace the value of the subnetwork-name parameter with the value of the shared VPC subnet that hosts your compute machines. The contents of the <installation_directory>/manifests/cloud-provider-config.yaml resemble the following example: config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet If you deploy a cluster that is not on a private network, open the <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml file and replace the value of the scope parameter with External . The contents of the file resemble the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: '' To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 12.8. Exporting common variables 12.8.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). 
The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 12.8.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Generate the Ignition config files for your cluster. Install the jq package. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' 1 USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 USD export NETWORK_CIDR='10.0.0.0/16' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` 1 2 Supply the values for the host project. 3 For <installation_directory> , specify the path to the directory that you stored the installation files in. 12.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 12.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 12.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. 
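As a quick spot check of both requirements, you can resolve a cluster hostname and attempt a TCP connection to the Kubernetes API port from one of your machines. The following is a minimal sketch that uses only the Python standard library; the hostname shown is a placeholder.
import socket

# Placeholder hostname: substitute one of your own cluster machines or the
# api-int endpoint for your cluster.
host = "api-int.test-cluster.example.com"
port = 6443  # Kubernetes API server

# Every machine must be able to resolve the hostnames of the other machines.
addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)})
print("resolved to:", addresses)

# Confirm that the port is reachable from this machine.
with socket.create_connection((host, port), timeout=5):
    print(f"TCP connection to {host}:{port} succeeded")
If the resolution or connection fails, review the DNS records and firewall rules that you create later in this process.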
Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 12.7. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 12.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 12.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 12.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. 
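Optionally, you can inspect a copied template locally before you deploy it by loading it and calling its GenerateConfig function with stand-in properties. The following is a minimal Python sketch; the property values are hypothetical examples that mirror the variables you export later in this procedure. Printing the resource names and types lets you verify the infra_id prefix before the deployment creates real resources.
import importlib.util
import types

# Load 02_lb_int.py from disk; the file name is not a valid Python module name.
spec = importlib.util.spec_from_file_location("lb_int", "02_lb_int.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# Hypothetical stand-in for the Deployment Manager context object.
context = types.SimpleNamespace(properties={
    "infra_id": "openshift-vw9j6",
    "region": "us-central1",
    "cluster_network": "https://www.googleapis.com/compute/v1/projects/example-host/global/networks/example-network",
    "control_subnet": "https://www.googleapis.com/compute/v1/projects/example-host/regions/us-central1/subnetworks/example-control-subnet",
    "zones": ["us-central1-a", "us-central1-b", "us-central1-c"],
})

for resource in module.GenerateConfig(context)["resources"]:
    print(resource["type"], resource["name"])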
Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 12.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 12.3. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 12.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 12.4. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 12.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. 
Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 12.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 12.5. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 12.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. 
Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 12.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 12.6. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network':
context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 12.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute subnets: Grant the networkViewer role of the project that hosts your shared VPC to the master service account: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer" Grant the networkUser role to the master service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the master service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 12.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 12.7. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 12.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 12.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Ensure pyOpenSSL is installed. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 12.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 12.8. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 12.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 12.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 12.9. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 12.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 12.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. 
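If you plan to add many compute machines, you can script the repetitive resource entries instead of writing them by hand. The following Python sketch is a hypothetical helper, not part of the documented procedure; it prints additional resource entries of type 06_worker.py that you can append to the 06_worker.yaml file that you create later in this procedure, and every value shown is a placeholder that you must replace with the variables that you export in the next step.
# Hypothetical helper that prints extra worker resource entries for 06_worker.yaml.
# All values are placeholders; substitute the variables exported in this procedure
# (INFRA_ID, ZONE_*, COMPUTE_SUBNET, CLUSTER_IMAGE, WORKER_SERVICE_ACCOUNT, WORKER_IGNITION).
ENTRY = """- name: '{name}'
  type: 06_worker.py
  properties:
    infra_id: '{infra_id}'
    zone: '{zone}'
    compute_subnet: '{compute_subnet}'
    image: '{image}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '{service_account}'
    ignition: '{ignition}'
"""

zones = ['us-central1-a', 'us-central1-b', 'us-central1-c']  # example zones

for index, zone in enumerate(zones):
    print(ENTRY.format(
        name='worker-{}'.format(index),
        infra_id='<infra_id>',
        zone=zone,
        compute_subnet='<compute_subnet_selfLink>',
        image='<rhcos_image_selfLink>',
        service_account='<worker_service_account_email>',
        ignition='<contents_of_worker.ign>',
    ))
The printed entries follow the same structure as the worker-0 and worker-1 resources shown in the resource definition file in the next step.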
Export the variables that the resource definition uses. Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 12.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 12.10. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 12.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 12.20.
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 12.22. Adding the ingress DNS records DNS zone configuration is removed when creating Kubernetes manifests and generating Ignition configs. You must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Configure a GCP account. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Create the worker machines. 
Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 12.23. Adding ingress firewall rules The cluster requires several firewall rules. If you do not use a shared VPC, these rules are created by the Ingress Controller via the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters. If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. 
Events that are similar to the following event are displayed, and you must add the firewall rules that are required: USD oc get events -n openshift-ingress --field-selector="reason=LoadBalancerManualChange" Example output Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project` If you encounter issues when creating these rule-based events, you can configure the cluster-wide firewall rules while your cluster is running. 12.23.1. Creating cluster-wide firewall rules for a shared VPC in GCP You can create cluster-wide firewall rules to allow the access that the OpenShift Container Platform cluster requires. Warning If you do not choose to create firewall rules based on cluster events, you must create cluster-wide firewall rules. Prerequisites You exported the variables that the Deployment Manager templates require to deploy your cluster. You created the networking and load balancing components in GCP that your cluster requires. Procedure Add a single firewall rule to allow the Google Cloud Engine health checks to access all of the services. This rule enables the ingress load balancers to determine the health status of their instances. USD gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="USD{CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} Add a single firewall rule to allow access to all cluster services: For an external cluster: USD gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="USD{CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} For a private cluster: USD gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="USD{CLUSTER_NETWORK}" --source-ranges=USD{NETWORK_CIDR} --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} Because this rule only allows traffic on TCP ports 80 and 443 , ensure that you add all the ports that your services use. 12.24. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... 
openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 12.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 12.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
[ "export MASTER_SUBNET_CIDR='10.0.0.0/17'", "export WORKER_SUBNET_CIDR='10.0.128.0/17'", "export REGION='<region>'", "export HOST_PROJECT=<host_project>", "export HOST_PROJECT_ACCOUNT=<host_service_account_email>", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1", "export HOST_PROJECT_NETWORK=<vpc_network>", "export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>", "export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "mkdir <installation_directory>", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 5 - control-plane-tag1 - control-plane-tag2 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c tags: 8 - compute-tag1 - compute-tag2 replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: gcp: defaultMachinePlatform: tags: 10 - global-tag1 - global-tag2 projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' fips: false 13 sshKey: ssh-ed25519 AAAA... 14 publish: Internal 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}", "config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': 
['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding \"USD{HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region USD{REGION}", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create 
service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 
'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': 
[{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"", "Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`", "gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"USD{CLUSTER_NETWORK}\" --source-ranges=USD{NETWORK_CIDR} --target-tags=\"USD{INFRA_ID}-master,USD{INFRA_ID}-worker\" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT}", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 
4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_gcp/installing-gcp-user-infra-vpc
3.4. Virtualization
3.4. Virtualization
virt-p2v component, BZ# 816930 Converting a physical server running either Red Hat Enterprise Linux 4 or Red Hat Enterprise Linux 5 which has its file system root on an MD device is not supported. Converting such a guest results in a guest which fails to boot. Note that conversion of a Red Hat Enterprise Linux 6 server which has its root on an MD device is supported.
virt-p2v component, BZ# 808820 When converting a physical host with multipath storage, Virt-P2V presents all available paths for conversion. Only a single path must be selected, and it must be a currently active path.
vdsm component, BZ# 826921 The following NFS parameter has been deprecated in the /etc/vdsm/vdsm.conf file: [irs] nfs_mount_options = soft,nosharecache,vers=3 This parameter will continue to be supported in versions 3.x, but will be removed in version 4.0. Customers using this parameter should upgrade their domains to V2 or greater and set the parameters from the GUI.
vdsm component, BZ# 749479 When adding a bond to an existing network, its world-visible MAC address may change. If the DHCP server is not aware that the new MAC address belongs to the same host as the old one, it may assign the host a different IP address that is known neither to the DNS server nor to Red Hat Enterprise Virtualization Manager. As a result, connectivity between Red Hat Enterprise Virtualization Manager and VDSM is broken. To work around this issue, configure your DHCP server to assign the same IP address for all the MAC addresses of slave NICs. Alternatively, when editing a management network, do not check connectivity, and make sure that Red Hat Enterprise Virtualization Manager and DNS use the newly assigned IP address for the node.
vdsm component Vdsm uses cgroups if they are available on the host. If the cgconfig service is turned off, turn it on with the chkconfig cgconfig on command and reboot. If you prefer not to reboot your system, restarting the libvirtd and vdsm services should be sufficient.
ovirt-node component, BZ# 747102 Upgrades from Beta to the GA version will result in an incorrect partitioning of the host. The GA version must be installed clean. UEFI machines must be set to legacy boot options for RHEV-H to boot successfully after installation.
kernel component When a system boots from SAN, it starts the libvirtd service, which enables IP forwarding. The service causes a driver reset on both Ethernet ports, which causes the loss of all paths to the OS disk. Under this condition, the system cannot load firmware files from the OS disk to initialize the Ethernet ports, never recovers the paths to the OS disk, and fails to boot from SAN. To work around this issue, add the bnx2x.disable_tpa=1 option to the kernel command line of the GRUB menu (a sketch of this change appears below), or do not install virtualization-related software and manually enable IP forwarding when needed.
vdsm component If the /root/.ssh/ directory is missing from a host when it is added to a Red Hat Enterprise Virtualization Manager data center, the directory is created with an incorrect SELinux context, and SSH access to the host is denied. To work around this issue, manually create the /root/.ssh directory with the correct SELinux context:
~]# mkdir /root/.ssh
~]# chmod 0700 /root/.ssh
~]# restorecon /root/.ssh
vdsm component VDSM now configures libvirt so that the connection to its local read-write UNIX domain socket is password-protected by SASL. The intention is to protect virtual machines from human errors of local host administrators. All operations that may change the state of virtual machines on a Red Hat Enterprise Virtualization-controlled host must be performed from Red Hat Enterprise Virtualization Manager.
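The boot-from-SAN workaround above can be applied by editing the GRUB configuration. The following is a minimal sketch only, assuming a Red Hat Enterprise Linux 6 host that uses the legacy GRUB configuration file /boot/grub/grub.conf; the sed command appends the option to every kernel line, so review the file manually before rebooting.
# Sketch: append bnx2x.disable_tpa=1 to each kernel line in the GRUB menu (RHEL 6 legacy GRUB assumed).
cp /boot/grub/grub.conf /boot/grub/grub.conf.bak
sed -i '/^[[:space:]]*kernel/ s/$/ bnx2x.disable_tpa=1/' /boot/grub/grub.conf
# Verify the change before the next reboot.
grep kernel /boot/grub/grub.conf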
libvirt component In earlier versions of Red Hat Enterprise Linux, libvirt permitted PCI devices to be insecurely assigned to guests. In Red Hat Enterprise Linux 6, assignment of insecure devices is disabled by default by libvirt . However, this may cause assignment of previously working devices to start failing. To enable the old, insecure setting, edit the /etc/libvirt/qemu.conf file, set the relaxed_acs_check = 1 parameter, and restart libvirtd ( service libvirtd restart ); a sketch of this edit appears at the end of these notes. Note that this action will re-open possible security issues.
virtio-win component, BZ# 615928 The balloon service on Windows 7 guests can only be started by the Administrator user.
libvirt component, BZ# 622649 libvirt uses transient iptables rules for managing NAT or bridging to virtual machine guests. Any external command that reloads the iptables state (such as running system-config-firewall ) will overwrite the entries needed by libvirt . Consequently, after running any command or tool that changes the state of iptables , guests may lose access to the network. To work around this issue, use the service libvirtd reload command to restore libvirt 's additional iptables rules.
virtio-win component, BZ# 612801 A Windows virtual machine must be restarted after the installation of the kernel Windows driver framework. If the virtual machine is not restarted, it may crash when a memory balloon operation is performed.
qemu-kvm component, BZ# 720597 Installing Windows 7 Ultimate x86 (32-bit) Service Pack 1 from a DVD medium on a guest with more than 4 GB of RAM and more than one CPU often fails during the final steps of the installation process because the system hangs. To work around this issue, use the Windows Update utility to install the Service Pack.
qemu-kvm component, BZ# 612788 A dual-function Intel 82576 Gigabit Ethernet Controller interface (codename: Kawela, PCI Vendor/Device ID: 8086:10c9) cannot have both physical functions (PFs) device-assigned to a Windows 2008 guest. Either physical function can be device-assigned to a Windows 2008 guest (PCI function 0 or function 1), but not both.
virt-v2v component, BZ# 618091 The virt-v2v utility is able to convert guests running on an ESX server. However, if an ESX guest has a disk with a snapshot, the snapshot must be on the same datastore as the underlying disk storage. If the snapshot and the underlying storage are on different datastores, virt-v2v will report a 404 error while trying to retrieve the storage.
virt-v2v component, BZ# 678232 The VMware Tools application on Microsoft Windows is unable to disable itself when it detects that it is no longer running on a VMware platform. Consequently, converting a Microsoft Windows guest that has VMware Tools installed from VMware ESX will result in errors. These errors usually manifest as error messages on start-up, and as a "Stop Error" (also known as a BSOD) when shutting down the guest. To work around this issue, uninstall VMware Tools on Microsoft Windows guests prior to conversion.
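As a rough illustration of the insecure PCI device assignment workaround described above, the following sketch sets relaxed_acs_check and restarts libvirtd. It assumes the parameter is not already present in /etc/libvirt/qemu.conf, and, as noted, it re-opens possible security issues.
# Sketch: re-enable the old, insecure PCI assignment behavior (assumes relaxed_acs_check is not already set).
grep -q '^relaxed_acs_check' /etc/libvirt/qemu.conf || echo 'relaxed_acs_check = 1' >> /etc/libvirt/qemu.conf
# Restart libvirtd so the new setting takes effect.
service libvirtd restart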
[ "[irs] nfs_mount_options = soft,nosharecache,vers=3", "~]# mkdir /root/.ssh ~]# chmod 0700 /root/.ssh ~]# restorecon /root/.ssh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/virtualization_issues
Chapter 9. KafkaListenerAuthenticationOAuth schema reference
Chapter 9. KafkaListenerAuthenticationOAuth schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationOAuth type from KafkaListenerAuthenticationTls , KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationCustom . It must have the value oauth for the type KafkaListenerAuthenticationOAuth . Property Description accessTokenIsJwt Configure whether the access token is treated as JWT. This must be set to false if the authorization server returns opaque tokens. Defaults to true . boolean checkAccessTokenType Configure whether the access token type check is performed or not. This should be set to false if the authorization server does not include 'typ' claim in JWT token. Defaults to true . boolean checkAudience Enable or disable audience checking. Audience checks identify the recipients of tokens. If audience checking is enabled, the OAuth Client ID also has to be configured using the clientId property. The Kafka broker will reject tokens that do not have its clientId in their aud (audience) claim.Default value is false . boolean checkIssuer Enable or disable issuer checking. By default issuer is checked using the value configured by validIssuerUri . Default value is true . boolean clientAudience The audience to use when making requests to the authorization server's token endpoint. Used for inter-broker authentication and for configuring OAuth 2.0 over PLAIN using the clientId and secret method. string clientId OAuth Client ID which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. string clientScope The scope to use when making requests to the authorization server's token endpoint. Used for inter-broker authentication and for configuring OAuth 2.0 over PLAIN using the clientId and secret method. string clientSecret Link to OpenShift Secret containing the OAuth client secret which the Kafka broker can use to authenticate against the authorization server and use the introspect endpoint URI. GenericSecretSource connectTimeoutSeconds The connect timeout in seconds when connecting to authorization server. If not set, the effective connect timeout is 60 seconds. integer customClaimCheck JsonPath filter query to be applied to the JWT token or to the response of the introspection endpoint for additional token validation. Not set by default. string disableTlsHostnameVerification Enable or disable TLS hostname verification. Default value is false . boolean enableECDSA The enableECDSA property has been deprecated. Enable or disable ECDSA support by installing BouncyCastle crypto provider. ECDSA support is always enabled. The BouncyCastle libraries are no longer packaged with AMQ Streams. Value is ignored. boolean enableMetrics Enable or disable OAuth metrics. Default value is false . boolean enableOauthBearer Enable or disable OAuth authentication over SASL_OAUTHBEARER. Default value is true . boolean enablePlain Enable or disable OAuth authentication over SASL_PLAIN. There is no re-authentication support when this mechanism is used. Default value is false . boolean failFast Enable or disable termination of Kafka broker processes due to potentially recoverable runtime errors during startup. Default value is true . boolean fallbackUserNameClaim The fallback username claim to be used for the user id if the claim specified by userNameClaim is not present. 
This is useful when client_credentials authentication only results in the client id being provided in another claim. It only takes effect if userNameClaim is set. string fallbackUserNamePrefix The prefix to use with the value of fallbackUserNameClaim to construct the user id. This only takes effect if fallbackUserNameClaim is set, and the value is present for the claim. Mapping usernames and client ids into the same user id space is useful in preventing name collisions. string groupsClaim JsonPath query used to extract groups for the user during authentication. Extracted groups can be used by a custom authorizer. By default no groups are extracted. string groupsClaimDelimiter A delimiter used to parse groups when they are extracted as a single String value rather than a JSON array. Default value is ',' (comma). string httpRetries The maximum number of retries to attempt if an initial HTTP request fails. If not set, the default is to not attempt any retries. integer httpRetryPauseMs The pause to take before retrying a failed HTTP request. If not set, the default is to not pause at all but to immediately repeat a request. integer introspectionEndpointUri URI of the token introspection endpoint which can be used to validate opaque non-JWT tokens. string jwksEndpointUri URI of the JWKS certificate endpoint, which can be used for local JWT validation. string jwksExpirySeconds Configures how often the JWKS certificates are considered valid. The expiry interval has to be at least 60 seconds longer than the refresh interval specified in jwksRefreshSeconds . Defaults to 360 seconds. integer jwksIgnoreKeyUse Flag to ignore the 'use' attribute of key declarations in a JWKS endpoint response. Default value is false . boolean jwksMinRefreshPauseSeconds The minimum pause between two consecutive refreshes. When an unknown signing key is encountered the refresh is scheduled immediately, but will always wait for this minimum pause. Defaults to 1 second. integer jwksRefreshSeconds Configures how often the JWKS certificates are refreshed. The refresh interval has to be at least 60 seconds shorter than the expiry interval specified in jwksExpirySeconds . Defaults to 300 seconds. integer maxSecondsWithoutReauthentication Maximum number of seconds the authenticated session remains valid without re-authentication. This enables the Apache Kafka re-authentication feature, and causes sessions to expire when the access token expires. If the access token expires before the maximum time or if the maximum time is reached, the client has to re-authenticate, otherwise the server will drop the connection. Not set by default - the authenticated session does not expire when the access token expires. This option only applies to the SASL_OAUTHBEARER authentication mechanism (when enableOauthBearer is true ). integer readTimeoutSeconds The read timeout in seconds when connecting to the authorization server. If not set, the effective read timeout is 60 seconds. integer tlsTrustedCertificates Trusted certificates for TLS connection to the OAuth server. CertSecretSource array tokenEndpointUri URI of the Token Endpoint to use with SASL_PLAIN mechanism when the client authenticates with clientId and a secret . If set, the client can authenticate over SASL_PLAIN by either setting username to clientId , and setting password to client secret , or by setting username to account username, and password to access token prefixed with $accessToken: .
If this option is not set, the password is always interpreted as an access token (without a prefix), and the username as the account username (a so-called 'no-client-credentials' mode). string type Must be oauth . string userInfoEndpointUri URI of the User Info Endpoint to use as a fallback to obtaining the user id when the Introspection Endpoint does not return information that can be used for the user id. string userNameClaim Name of the claim from the JWT authentication token, Introspection Endpoint response, or User Info Endpoint response that will be used to extract the user id. Defaults to sub . string validIssuerUri URI of the token issuer used for authentication. string validTokenType Valid value for the token_type attribute returned by the Introspection Endpoint. No default value, and not checked by default. string
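To show how several of these properties combine in practice, the following is a minimal, hypothetical sketch that writes a listener fragment using oauth authentication. The listener name, port, authorization server URIs, and output file name are illustrative assumptions; only properties documented above are used, and the fragment is meant to sit under the listeners array of a Kafka resource.
# Sketch only: writes a hypothetical GenericKafkaListener fragment that enables OAuth authentication.
# Replace the issuer and JWKS URIs with the values for your authorization server before merging it into a Kafka resource.
cat <<'EOF' > oauth-listener-fragment.yaml
- name: external
  port: 9093
  type: route
  tls: true
  authentication:
    type: oauth
    validIssuerUri: https://auth.example.com/realms/demo
    jwksEndpointUri: https://auth.example.com/realms/demo/protocol/openid-connect/certs
    userNameClaim: preferred_username
    maxSecondsWithoutReauthentication: 3600
EOF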
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkalistenerauthenticationoauth-reference
7.182. rhn-client-tools
7.182. rhn-client-tools 7.182.1. RHBA-2015:1395 - rhn-client-tools bug fix update Updated rhn-client-tools packages that fix several bugs are now available for Red Hat Enterprise Linux 6. Red Hat Network Client Tools provide programs and libraries that allow a system to receive software updates from Red Hat Network. Bug Fixes BZ# 871028 When the rhnpush command was executed with the --server option, and the sslCACert variable was pointing to a non-existent path, rhnpush failed even when the connection to the server used the http protocol instead of https. With this update, rhnpush searches for CA certificate only when it is necessary, which prevents the described failure from occurring. BZ# 1003790 Previously, the rhn_check command returned an exception when processing a script that contained non-ascii characters. With this update, rhn_check accepts non-ascii characters as expected. BZ# 1036586 When executing the rhnpush command without any options, the command redundantly prompted for user credentials, and afterwards displayed a usage message about missing options. With this update, the command displays available options without asking for credentials. BZ# 1094776 Red Hat Network Client Tools did not calculate the CPU socket information on certain systems properly. With this update, rhn-client-tools parse the /proc/cpuinfo file correctly and thus provide the correct CPU socket information for all systems. BZ# 1147319 , BZ# 1147322 , BZ# 1147890 , BZ# 1147904 , BZ# 1147916 Several minor bugs have been fixed in various localizations of the Red Hat Network Client Tools GUI. BZ# 1147425 Previously, when running the "firstboot --reconfig" command on the system that was already registered with the Red Hat Subscription Management, the boot procedure failed on the Choose Service page. This bug has been fixed, and the exception no longer occurs on registered systems. Users of rhn-client-tools are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-rhn-client-tools
Chapter 12. Installing a cluster on AWS China
Chapter 12. Installing a cluster on AWS China In OpenShift Container Platform version 4.13, you can install a cluster to the following Amazon Web Services (AWS) China regions: cn-north-1 (Beijing) cn-northwest-1 (Ningxia) 12.1. Prerequisites You have an Internet Content Provider (ICP) license. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. 12.2. Installation requirements Red Hat does not publish a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for the AWS China regions. Before you can install the cluster, you must: Upload a custom RHCOS AMI. Manually create the installation configuration file ( install-config.yaml ). Specify the AWS region, and the accompanying custom AMI, in the installation configuration file. You cannot use the OpenShift Container Platform installation program to create the installation configuration file. The installer does not list an AWS region without native support for an RHCOS AMI. 12.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 12.4. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. 
This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network. Note AWS China does not support a VPN connection between the VPC and your network. For more information about the Amazon VPC service in the Beijing and Ningxia regions, see Amazon Virtual Private Cloud in the AWS China documentation. 12.4.1. Private clusters in AWS To create a private cluster on Amazon Web Services (AWS), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for access from only the private network. The cluster still requires access to internet to access the AWS APIs. The following items are not required or created when you install a private cluster: Public subnets Public load balancers, which support public ingress A public Route 53 zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private Route 53 zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. 12.4.1.1. Limitations The ability to add public functionality to a private cluster is limited. You cannot make the Kubernetes API endpoints public after installation without taking additional actions, including creating public subnets in the VPC for each availability zone in use, creating a public load balancer, and configuring the control plane security groups to allow traffic from the internet on 6443 (Kubernetes API port). If you use a public Service type load balancer, you must tag a public subnet in each availability zone with kubernetes.io/cluster/<cluster-infra-id>: shared so that AWS can use them to create public load balancers. 12.5. About using a custom VPC In OpenShift Container Platform 4.13, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 12.5.1. 
Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com.cn elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. 
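As a hedged illustration of Option 3, the following sketch writes a proxy fragment that could be merged into the install-config.yaml file. The proxy address, credentials, and output file name are placeholder assumptions, and the noProxy entries simply reuse the endpoint names listed above.
# Sketch only: a hypothetical proxy fragment for install-config.yaml (Option 3, proxy with VPC endpoints).
# Replace the placeholder proxy URL, credentials, and <aws_region> before use.
cat <<'EOF' > proxy-fragment.yaml
proxy:
  httpProxy: http://<username>:<password>@<proxy_ip>:<port>
  httpsProxy: https://<username>:<password>@<proxy_ip>:<port>
  noProxy: ec2.<aws_region>.amazonaws.com.cn,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com
EOF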
Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 12.5.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 12.5.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resource in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. 
You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 12.5.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 12.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 12.7. 
Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 1 The AWS profile name that holds your AWS credentials, like beijingadmin . Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 1 The AWS region, like cn-north-1 . Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 The RHCOS VMDK version, like 4.13.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 12.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. 
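If you script the download instead of using the web console, the client tools are also published on the Red Hat public mirror. This is a hedged alternative, not the documented path: the directory layout and file names on mirror.openshift.com are assumptions and can change between releases, so verify them in a browser first.
# Download and unpack the installer for a specific release channel (adjust architecture and version as needed)
$ curl -LO https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/stable-4.13/openshift-install-linux.tar.gz
$ tar -xvf openshift-install-linux.tar.gz
$ ./openshift-install version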
Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 12.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have uploaded a custom RHCOS AMI. You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 12.9.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. 
If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 12.9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 12.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. 
String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 12.9.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 publish: Internal 23 pullSecret: '{"auths": ...}' 24 1 12 14 17 24 Required. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 18 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 19 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 20 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 12.9.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 12.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. 
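As a quick worked example of that formula: an instance that exposes 2 threads per core, 2 cores, and 1 socket counts as (2 x 2) x 1 = 4 vCPUs, which satisfies the 4 vCPU bootstrap and control plane minimums in the table; with simultaneous multithreading disabled, the same machine counts as only 2 vCPUs.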
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 12.9.4. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 12.9.5. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 12.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 12.9.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 12.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 12.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 12.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 12.13. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. 
Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 12.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service. 12.15. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
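Before you log in to the web console, you can confirm that the cluster Operators have all rolled out. This quick check is not part of the documented procedure; it only assumes that oc is installed and that KUBECONFIG is exported as described above.
$ oc get clusteroperators
# Or block until every cluster Operator reports Available (times out after 15 minutes)
$ oc wait --for=condition=Available clusteroperator --all --timeout=15m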
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - cn-north-1a - cn-north-1b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - cn-north-1a replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: cn-north-1 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 18 serviceEndpoints: 19 - name: ec2 url: https://vpce-id.ec2.cn-north-1.vpce.amazonaws.com.cn hostedZone: Z3URY6TWQ91KVV 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 
22 publish: Internal 23 pullSecret: '{\"auths\": ...}' 24", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_aws/installing-aws-china-region
A.10. Shutting down Red Hat Enterprise Linux 6 Guests on a Red Hat Enterprise Linux 7 Host
A.10. Shutting down Red Hat Enterprise Linux 6 Guests on a Red Hat Enterprise Linux 7 Host Installing Red Hat Enterprise Linux 6 guest virtual machines with the Minimal installation option does not install the acpid (acpi daemon). Red Hat Enterprise Linux 7 no longer requires this package, as it has been taken over by systemd . However, Red Hat Enterprise Linux 6 guest virtual machines running on a Red Hat Enterprise Linux 7 host still require it. Without the acpid package, the Red Hat Enterprise Linux 6 guest virtual machine does not shut down when the virsh shutdown command is executed. The virsh shutdown command is designed to gracefully shut down guest virtual machines. Using the virsh shutdown command is easier and safer for system administration. Without graceful shut down with the virsh shutdown command a system administrator must log into a guest virtual machine manually or send the Ctrl - Alt - Del key combination to each guest virtual machine. Note Other virtualized operating systems may be affected by this issue. The virsh shutdown command requires that the guest virtual machine operating system is configured to handle ACPI shut down requests. Many operating systems require additional configurations on the guest virtual machine operating system to accept ACPI shut down requests. Procedure A.4. Workaround for Red Hat Enterprise Linux 6 guests Install the acpid package The acpid service listens and processes ACPI requests. Log into the guest virtual machine and install the acpid package on the guest virtual machine: Enable the acpid service on the guest Set the acpid service to start during the guest virtual machine boot sequence and start the service: Prepare guest domain XML Edit the domain XML file to include the following element. Replace the virtio serial port with org.qemu.guest_agent.0 and use your guest's name instead of the one shown. In this example, the guest is guest1. Remember to save the file. <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/guest1.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Figure A.1. Guest XML replacement Install the QEMU guest agent Install the QEMU guest agent (QEMU-GA) and start the service as directed in the Red Hat Enterprise Linux 6 Virtualization Administration Guide . Shut down the guest List the known guest virtual machines so you can retrieve the name of the one you want to shutdown. Shut down the guest virtual machine. Wait a few seconds for the guest virtual machine to shut down. Verify it is shutdown. Start the guest virtual machine named guest1 , with the XML file you edited. Shut down the acpi in the guest1 guest virtual machine. List all the guest virtual machines again, guest1 should still be on the list, and it should indicate it is shut off. Start the guest virtual machine named guest1 , with the XML file you edited. Shut down the guest1 guest virtual machine guest agent. List the guest virtual machines. guest1 should still be on the list, and it should indicate it is shut off. The guest virtual machine will shut down using the virsh shutdown command for the consecutive shutdowns, without using the workaround described above. In addition to the method described above, a guest can be automatically shutdown, by stopping the libvirt-guests service. See Section A.11, "Optional Workaround to Allow for Graceful Shutdown" for more information on this method.
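If you prefer the libvirt-guests approach mentioned above, the host-side setup on Red Hat Enterprise Linux 7 looks roughly like the following. This is a sketch, not the full procedure from Section A.11, and the timeout value shown is an arbitrary example.
# In /etc/sysconfig/libvirt-guests on the host, request a graceful shutdown instead of a managed save:
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
# Then enable and start the service on the Red Hat Enterprise Linux 7 host:
systemctl enable libvirt-guests
systemctl start libvirt-guests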
[ "yum install acpid", "chkconfig acpid on service acpid start", "<channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/guest1.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "virsh list --all Id Name State ---------------------------------- 14 guest1 running", "virsh shutdown guest1 guest virtual machine guest1 is being shutdown", "virsh list --all Id Name State ---------------------------------- 14 guest1 shut off", "virsh start guest1", "virsh shutdown --mode acpi guest1", "virsh list --all Id Name State ---------------------------------- 14 guest1 shut off", "virsh start guest1", "virsh shutdown --mode agent guest1", "virsh list --all Id Name State ---------------------------------- guest1 shut off" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine-shutting_down_rednbsphat_enterprisenbsplinuxnbsp6_guests_on_a_rednbsphat_enterprisenbsplinuxnbsp7_host
Deploying Red Hat Hyperconverged Infrastructure for Virtualization on a single node
Deploying Red Hat Hyperconverged Infrastructure for Virtualization on a single node Red Hat Hyperconverged Infrastructure for Virtualization 1.8 Create a hyperconverged configuration with a single server Laura Bailey [email protected]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/index
Chapter 7. Updating Logging
Chapter 7. Updating Logging There are two types of logging updates: minor release updates (5.y.z) and major release updates (5.y). 7.1. Minor release updates If you installed the logging Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps. If you installed the logging Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update . 7.2. Major release updates For major version updates you must complete some manual steps. For major release version compatibility and support information, see OpenShift Operator Life Cycles . 7.3. Upgrading the Red Hat OpenShift Logging Operator to watch all namespaces In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components. Prerequisites You have installed the OpenShift CLI ( oc ). You have administrator permissions. Procedure Delete the subscription by running the following command: USD oc -n openshift-logging delete subscription <subscription> Delete the Operator group by running the following command: USD oc -n openshift-logging delete operatorgroup <operator_group_name> Delete the cluster service version (CSV) by running the following command: USD oc delete clusterserviceversion cluster-logging.<version> Redeploy the Red Hat OpenShift Logging Operator by following the "Installing Logging" documentation. Verification Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string. To do this, run the following command and inspect the output: USD oc get operatorgroup <operator_group_name> -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - "" # ... 7.4. Updating the Red Hat OpenShift Logging Operator To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-logging project. Click the Red Hat OpenShift Logging Operator. Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.9 , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.9 , and click Save . Note the cluster-logging.v5.9.<z> version. Wait for a few seconds, and then go to Operators Installed Operators to verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.9.<z> version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. 
For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema". 7.5. Updating the Loki Operator To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Loki Operator. You have administrator permissions. You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-operators-redhat project. Click the Loki Operator . Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y , and click Save . Note the loki-operator.v5.y.z version. Wait for a few seconds, then click Operators Installed Operators . Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . Check if the LokiStack custom resource contains the v13 schema version and add it if it is missing. For correctly adding the v13 schema version, see "Upgrading the LokiStack storage schema". 7.6. Upgrading the LokiStack storage schema If you are using the Red Hat OpenShift Logging Operator with the Loki Operator, the Red Hat OpenShift Logging Operator 5.9 or later supports the v13 schema version in the LokiStack custom resource. Upgrading to the v13 schema version is recommended because it is the schema version to be supported going forward. Procedure Add the v13 schema version in the LokiStack custom resource as follows: apiVersion: loki.grafana.com/v1 kind: LokiStack # ... spec: # ... storage: schemas: # ... version: v12 1 - effectiveDate: "<yyyy>-<mm>-<future_dd>" 2 version: v13 # ... 1 Do not delete. Data persists in its original schema version. Keep the schema versions to avoid data loss. 2 Set a future date that has not yet started in the Coordinated Universal Time (UTC) time zone. Tip To edit the LokiStack custom resource, you can run the oc edit command: USD oc edit lokistack <name> -n openshift-logging Verification On or after the specified effectiveDate date, check that there is no LokistackSchemaUpgradesRequired alert in the web console in Administrator Observe Alerting . 7.7. Updating the OpenShift Elasticsearch Operator To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Prerequisites If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. Important If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod. 
When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. The Logging status is healthy: All pods have a ready status. The Elasticsearch cluster is healthy. Your Elasticsearch and Kibana data is backed up . You have administrator permissions. You have installed the OpenShift CLI ( oc ) for the verification steps. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the openshift-operators-redhat project. Click OpenShift Elasticsearch Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select stable-5.y and click Save . Note the elasticsearch-operator.v5.y.z version. Wait for a few seconds, then click Operators Installed Operators . Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . Verification Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output: USD oc get pod -n openshift-logging --selector component=elasticsearch Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m Verify that the Elasticsearch cluster status is green by entering the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health Example output { "cluster_name" : "elasticsearch", "status" : "green", } Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output: USD oc project openshift-logging USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output: USD oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices Verify that the output includes the app-00000x , infra-00000x , audit-00000x , .security indices: Example 7.1. 
Sample output with indices in a green status Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0 Verify that the log visualizer is updated to the correct version by entering the following command and observing the output: USD oc get kibana kibana -o json Verify that the output includes a Kibana pod with the ready status: Example 7.2. Sample output with a ready Kibana pod [ { "clusterCondition": { "kibana-5fdd766ffd-nb2jj": [ { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" }, { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" } ] }, "deployment": "kibana", "pods": { "failed": [], "notReady": [] "ready": [] }, "replicaSets": [ "kibana-5fdd766ffd" ], "replicas": 1 } ]
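If you prefer a scripted check over reading the full JSON, you can count the Kibana pods that are not yet ready with jq. This is a sketch based on the example output above; it assumes jq is available on your workstation and that the status path matches your Logging version:

# A result of 0 means every Kibana pod is ready; a non-zero result means pods are still rolling out.
# The .status[0].pods path mirrors the example output above and can differ between versions.
oc -n openshift-logging get kibana kibana -o json | jq '.status[0].pods.notReady | length'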
[ "oc -n openshift-logging delete subscription <subscription>", "oc -n openshift-logging delete operatorgroup <operator_group_name>", "oc delete clusterserviceversion cluster-logging.<version>", "oc get operatorgroup <operator_group_name> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"", "apiVersion: loki.grafana.com/v1 kind: LokiStack spec: storage: schemas: # version: v12 1 - effectiveDate: \"<yyyy>-<mm>-<future_dd>\" 2 version: v13", "oc edit lokistack <name> -n openshift-logging", "oc get pod -n openshift-logging --selector component=elasticsearch", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m", "oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }", "oc project openshift-logging", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s", "oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices", "Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0", "oc get kibana kibana -o json", "[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], 
\"replicas\": 1 } ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/cluster-logging-upgrading
Chapter 8. Triggering updates on image stream changes
Chapter 8. Triggering updates on image stream changes When an image stream tag is updated to point to a new image, OpenShift Container Platform can automatically take action to roll the new image out to resources that were using the old image. You configure this behavior in different ways depending on the type of resource that references the image stream tag. 8.1. OpenShift Container Platform resources OpenShift Container Platform deployment configurations and build configurations can be automatically triggered by changes to image stream tags. The triggered action can be run using the new value of the image referenced by the updated image stream tag. 8.2. Triggering Kubernetes resources Kubernetes resources do not have fields for triggering, unlike deployment and build configurations, which include as part of their API definition a set of fields for controlling triggers. Instead, you can use annotations in OpenShift Container Platform to request triggering. The annotation is defined as follows: Key: image.openshift.io/triggers Value: [ { "from": { "kind": "ImageStreamTag", 1 "name": "example:latest", 2 "namespace": "myapp" 3 }, "fieldPath": "spec.template.spec.containers[?(@.name==\"web\")].image", 4 "paused": false 5 }, ... ] 1 Required: kind is the resource to trigger from must be ImageStreamTag . 2 Required: name must be the name of an image stream tag. 3 Optional: namespace defaults to the namespace of the object. 4 Required: fieldPath is the JSON path to change. This field is limited and accepts only a JSON path expression that precisely matches a container by ID or index. For pods, the JSON path is "spec.containers[?(@.name='web')].image". 5 Optional: paused is whether or not the trigger is paused, and the default value is false . Set paused to true to temporarily disable this trigger. When one of the core Kubernetes resources contains both a pod template and this annotation, OpenShift Container Platform attempts to update the object by using the image currently associated with the image stream tag that is referenced by trigger. The update is performed against the fieldPath specified. Examples of core Kubernetes resources that can contain both a pod template and annotation include: CronJobs Deployments StatefulSets DaemonSets Jobs ReplicationControllers Pods 8.3. Setting the image trigger on Kubernetes resources When adding an image trigger to deployments, you can use the oc set triggers command. For example, the sample command in this procedure adds an image change trigger to the deployment named example so that when the example:latest image stream tag is updated, the web container inside the deployment updates with the new image value. This command sets the correct image.openshift.io/triggers annotation on the deployment resource. Procedure Trigger Kubernetes resources by entering the oc set triggers command: USD oc set triggers deploy/example --from-image=example:latest -c web Unless the deployment is paused, this pod template update automatically causes a deployment to occur with the new image value.
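After running the command, you can confirm what was configured. The following sketch reuses the example deployment and container names from above:

# List the triggers that are now configured on the deployment.
oc set triggers deploy/example

# Print the raw image.openshift.io/triggers annotation; the dots in the key are escaped for jsonpath.
oc get deploy/example -o jsonpath='{.metadata.annotations.image\.openshift\.io/triggers}{"\n"}'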
[ "Key: image.openshift.io/triggers Value: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, ]", "oc set triggers deploy/example --from-image=example:latest -c web" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/images/triggering-updates-on-imagestream-changes
Chapter 3. Important update on odo
Chapter 3. Important update on odo Red Hat does not provide information about odo on the OpenShift Container Platform documentation site. For information related to odo, see the documentation maintained by Red Hat and the upstream community. Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/developer-cli-odo
6.4.3. Related Books
6.4.3. Related Books The following books discuss various issues related to account and resource management, and are good resources for Red Hat Enterprise Linux system administrators. The Security Guide ; Red Hat, Inc - Provides an overview of the security-related aspects of user accounts, namely choosing strong passwords. The Reference Guide ; Red Hat, Inc - Contains detailed information on the users and groups present in Red Hat Enterprise Linux. The System Administrators Guide ; Red Hat, Inc - Includes a chapter on user and group configuration. Linux Administration Handbook by Evi Nemeth, Garth Snyder, and Trent R. Hein; Prentice Hall - Provides a chapter on user account maintenance, a section on security as it relates to user account files, and a section on file attributes and permissions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-acctsgrps-addres-books
Chapter 2. Installing and configuring the logs service
Chapter 2. Installing and configuring the logs service Red Hat OpenStack Platform (RHOSP) writes informational messages to specific log files; you can use these messages for troubleshooting and monitoring system events. The log collection agent Rsyslog collects logs on the client side and sends these logs to an instance of Rsyslog that is running on the server side. The server-side Rsyslog instance redirects log records to Elasticsearch for storage. Note You do not need to attach the individual log files to your support cases manually. The sosreport utility gathers the required logs automatically. 2.1. The centralized log system architecture and components Monitoring tools use a client-server model with the client deployed onto the Red Hat OpenStack Platform (RHOSP) overcloud nodes. The Rsyslog service provides client-side centralized logging (CL). All RHOSP services generate and update log files. These log files record actions, errors, warnings, and other events. In a distributed environment like OpenStack, collecting these logs in a central location simplifies debugging and administration. With centralized logging, there is one central place to view logs across your entire RHOSP environment. These logs come from the operating system, such as syslog and audit log files, infrastructure components, such as RabbitMQ and MariaDB, and OpenStack services such as Identity, Compute, and others. The centralized logging toolchain consists of the following components: Log Collection Agent (Rsyslog) Data Store (ElasticSearch) API/Presentation Layer (Grafana) Note Red Hat OpenStack Platform director does not deploy the server-side components for centralized logging. Red Hat does not support the server-side components, including the Elasticsearch database and Grafana. 2.2. Enabling centralized logging with Elasticsearch To enable centralized logging, you must specify the implementation of the OS::TripleO::Services::Rsyslog composable service. Note The Rsyslog service uses only Elasticsearch as a data store for centralized logging. Prerequisites Elasticsearch is installed on the server side. Procedure Add the file path of the logging environment file to the overcloud deployment command with any other environment files that are relevant to your environment and deploy, as shown in the following example: Replace <existing_overcloud_environment_files> with the list of environment files that are part of your existing deployment. 2.3. Configuring logging features To configure logging features, modify the RsyslogElasticsearchSetting parameter in the logging-environment-rsyslog.yaml file. Procedure Copy the tripleo-heat-templates/environments/logging-environment-rsyslog.yaml file to your home directory. Create entries in the RsyslogElasticsearchSetting parameter to suit your environment. The following snippet is an example configuration of the RsyslogElasticsearchSetting parameter: Additional resources For more information about the configurable parameters, see Section 2.3.1, "Configurable logging parameters" . 2.3.1. Configurable logging parameters This table contains descriptions of logging parameters that you use to configure logging features in Red Hat OpenStack Platform (RHOSP). You can find these parameters in the tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml file. Table 2.1. Configurable logging parameters Parameter Description RsyslogElasticsearchSetting Configuration for rsyslog-elasticsearch plugin. 
For more information, see https://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html . RsyslogElasticsearchTlsCACert Contains the content of the CA cert for the CA that issued the Elasticsearch server cert. RsyslogElasticsearchTlsClientCert Contains the content of the client cert for doing client cert authorization against Elasticsearch. RsyslogElasticsearchTlsClientKey Contains the content of the private key corresponding to the cert RsyslogElasticsearchTlsClientCert . 2.4. Overriding the default path for a log file If you modify the default containers and the modification includes the path to the service log file, you must also modify the default log file path. Every composable service has a <service_name>LoggingSource parameter. For example, for the nova-compute service, the parameter is NovaComputeLoggingSource . Procedure To override the default path for the nova-compute service, add the path to the NovaComputeLoggingSource parameter in your configuration file: Note For each service, define the tag and file . Other values are derived by default. You can modify the format for a specific service. This passes directly to the Rsyslog configuration. The default format for the LoggingDefaultFormat parameter is /(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}.\d+) (?<pid>\d+) (?<priority>\S+) (?<message>.*)USD/ Use the following syntax: The following snippet is an example of a more complex transformation: 2.5. Modifying the format of a log record You can modify the format of the start of the log record for a specific service. This passes directly to the Rsyslog configuration. The default format for the Red Hat OpenStack Platform (RHOSP) log record is ('^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ [0-9]+)? (DEBUG|INFO|WARNING|ERROR) '). Procedure To add a different regular expression for parsing the start of log records, add startmsg.regex to the configuration: 2.6. Testing the connection between Rsyslog and Elasticsearch On the client side, you can verify communication between Rsyslog and Elasticsearch. Procedure Navigate to the Elasticsearch connection log file, /var/log/rsyslog/omelasticsearch.log in the Rsyslog container or /var/log/containers/rsyslog/omelasticsearch.log on the host. If this log file does not exist or if the log file exists but does not contain logs, there is no connection problem. If the log file is present and contains logs, Rsyslog has not connected successfully. Note To test the connection from the server side, view the Elasticsearch logs for connection issues. 2.7. Server-side logging If you have an Elasticsearch cluster running, you must configure the RsyslogElasticsearchSetting parameter in the logging-environment-rsyslog.yaml file to connect Rsyslog that is running on overcloud nodes. To configure the RsyslogElasticsearchSetting parameter, see https://www.rsyslog.com/doc/v8-stable/configuration/modules/omelasticsearch.html 2.8. Tracebacks When you encounter an issue and you start troubleshooting, you can use a traceback log to diagnose the issue. In log files, tracebacks usually have several lines of information, all relating to the same issue. Rsyslog provides a regular expression to define how a log record starts. Each log record usually starts with a timestamp and the first line of the traceback is the only line that contains this information. Rsyslog bundles the indented records with the first line and sends them as one log record. For that behaviour configuration option startmsg.regex in <Service>LoggingSource is used. 
The following regular expression is the default value for all <service>LoggingSource parameters in director: When this default does not match log records of your added or modified LoggingSource , you must change startmsg.regex accordingly. 2.9. Location of log files for OpenStack services Each OpenStack component has a separate logging directory containing files specific to a running service. 2.9.1. Bare Metal Provisioning (ironic) log files Service Service name Log path OpenStack Ironic API openstack-ironic-api.service /var/log/containers/ironic/ironic-api.log OpenStack Ironic Conductor openstack-ironic-conductor.service /var/log/containers/ironic/ironic-conductor.log 2.9.2. Block Storage (cinder) log files Service Service name Log path Block Storage API openstack-cinder-api.service /var/log/containers/cinder-api.log Block Storage Backup openstack-cinder-backup.service /var/log/containers/cinder/backup.log Informational messages The cinder-manage command /var/log/containers/cinder/cinder-manage.log Block Storage Scheduler openstack-cinder-scheduler.service /var/log/containers/cinder/scheduler.log Block Storage Volume openstack-cinder-volume.service /var/log/containers/cinder/volume.log 2.9.3. Compute (nova) log files Service Service name Log path OpenStack Compute API service openstack-nova-api.service /var/log/containers/nova/nova-api.log OpenStack Compute certificate server openstack-nova-cert.service /var/log/containers/nova/nova-cert.log OpenStack Compute service openstack-nova-compute.service /var/log/containers/nova/nova-compute.log OpenStack Compute Conductor service openstack-nova-conductor.service /var/log/containers/nova/nova-conductor.log OpenStack Compute VNC console authentication server openstack-nova-consoleauth.service /var/log/containers/nova/nova-consoleauth.log Informational messages nova-manage command /var/log/containers/nova/nova-manage.log OpenStack Compute NoVNC Proxy service openstack-nova-novncproxy.service /var/log/containers/nova/nova-novncproxy.log OpenStack Compute Scheduler service openstack-nova-scheduler.service /var/log/containers/nova/nova-scheduler.log 2.9.4. Dashboard (horizon) log files Service Service name Log path Log of certain user interactions Dashboard interface /var/log/containers/horizon/horizon.log The Apache HTTP server uses several additional log files for the Dashboard web interface, which you can access by using a web browser or command-line client, for example, keystone and nova. The log files in the following table can be helpful in tracking the use of the Dashboard and diagnosing faults: Purpose Log path All processed HTTP requests /var/log/containers/httpd/horizon_access.log HTTP errors /var/log/containers/httpd/horizon_error.log Admin-role API requests /var/log/containers/httpd/keystone_wsgi_admin_access.log Admin-role API errors /var/log/containers/httpd/keystone_wsgi_admin_error.log Member-role API requests /var/log/containers/httpd/keystone_wsgi_main_access.log Member-role API errors /var/log/containers/httpd/keystone_wsgi_main_error.log Note There is also /var/log/containers/httpd/default_error.log , which stores errors reported by other web services that are running on the same host. 2.9.5. Identity Service (keystone) log files Service Service name Log Path OpenStack Identity Service openstack-keystone.service /var/log/containers/keystone/keystone.log 2.9.6. 
Image Service (glance) log files Service Service name Log path OpenStack Image Service API server openstack-glance-api.service /var/log/containers/glance/api.log OpenStack Image Service Registry server openstack-glance-registry.service /var/log/containers/glance/registry.log 2.9.7. Networking (neutron) log files Service Service name Log path OpenStack Neutron DHCP Agent neutron-dhcp-agent.service /var/log/containers/neutron/dhcp-agent.log OpenStack Networking Layer 3 Agent neutron-l3-agent.service /var/log/containers/neutron/l3-agent.log Metadata agent service neutron-metadata-agent.service /var/log/containers/neutron/metadata-agent.log Metadata namespace proxy n/a /var/log/containers/neutron/neutron-ns-metadata-proxy- UUID .log Open vSwitch agent neutron-openvswitch-agent.service /var/log/containers/neutron/openvswitch-agent.log OpenStack Networking service neutron-server.service /var/log/containers/neutron/server.log 2.9.8. Object Storage (swift) log files OpenStack Object Storage sends logs to the system logging facility only. Note By default, all Object Storage log files go to /var/log/containers/swift/swift.log , using the local0, local1, and local2 syslog facilities. The log messages of Object Storage are classified into two broad categories: those by REST API services and those by background daemons. The API service messages contain one line per API request, in a manner similar to popular HTTP servers; both the frontend (Proxy) and backend (Account, Container, Object) services post such messages. The daemon messages are less structured and typically contain human-readable information about daemons performing their periodic tasks. However, regardless of which part of Object Storage produces the message, the source identity is always at the beginning of the line. Here is an example of a proxy message: Here is an example of ad-hoc messages from background daemons: 2.9.9. Orchestration (heat) log files Service Service name Log path OpenStack Heat API Service openstack-heat-api.service /var/log/containers/heat/heat-api.log OpenStack Heat Engine Service openstack-heat-engine.service /var/log/containers/heat/heat-engine.log Orchestration service events n/a /var/log/containers/heat/heat-manage.log 2.9.10. Shared Filesystem Service (manila) log files Service Service name Log path OpenStack Manila API Server openstack-manila-api.service /var/log/containers/manila/api.log OpenStack Manila Scheduler openstack-manila-scheduler.service /var/log/containers/manila/scheduler.log OpenStack Manila Share Service openstack-manila-share.service /var/log/containers/manila/share.log Note Some information from the Manila Python library can also be logged in /var/log/containers/manila/manila-manage.log . 2.9.11. Telemetry (ceilometer) log files Service Service name Log path OpenStack ceilometer notification agent ceilometer_agent_notification /var/log/containers/ceilometer/agent-notification.log OpenStack ceilometer central agent ceilometer_agent_central /var/log/containers/ceilometer/central.log OpenStack ceilometer collection openstack-ceilometer-collector.service /var/log/containers/ceilometer/collector.log OpenStack ceilometer compute agent ceilometer_agent_compute /var/log/containers/ceilometer/compute.log 2.9.12. Log files for supporting services The following services are used by the core OpenStack components and have their own log directories and files. 
Service Service name Log path Message broker (RabbitMQ) rabbitmq-server.service /var/log/rabbitmq/rabbit@ short_hostname .log /var/log/rabbitmq/rabbit@ short_hostname -sasl.log (for Simple Authentication and Security Layer related log messages) Database server (MariaDB) mariadb.service /var/log/mariadb/mariadb.log Virtual network switch (Open vSwitch) openvswitch-nonetwork.service /var/log/openvswitch/ovsdb-server.log /var/log/openvswitch/ovs-vswitchd.log 2.9.13. aodh (alarming service) log files Service Container name Log path Alarming API aodh_api /var/log/containers/httpd/aodh-api/aodh_wsgi_access.log Alarm evaluator log aodh_evaluator /var/log/containers/aodh/aodh-evaluator.log Alarm listener aodh_listener /var/log/containers/aodh/aodh-listener.log Alarm notification aodh_notifier /var/log/containers/aodh/aodh-notifier.log 2.9.14. gnocchi (metric storage) log files Service Container name Log path Gnocchi API gnocchi_api /var/log/containers/httpd/gnocchi-api/gnocchi_wsgi_access.log Gnocchi metricd gnocchi_metricd /var/log/containers/gnocchi/gnocchi-metricd.log Gnocchi statsd gnocchi_statsd /var/log/containers/gnocchi/gnocchi-statsd.log
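With the log locations above, a quick way to scan an overcloud node for recent errors and tracebacks across all containerized services is a simple shell loop. This is a sketch, not a supported tool; not every log path listed above exists on every node role:

# Find containerized service logs that contain errors or Python tracebacks and show their last lines.
sudo grep -rl --include='*.log' -e ' ERROR ' -e 'Traceback (most recent call last)' /var/log/containers/ \
  | while read -r logfile; do
      echo "== ${logfile} =="
      sudo tail -n 5 "${logfile}"
    done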
[ "openstack overcloud deploy <existing_overcloud_environment_files> -e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment-rsyslog.yaml", "parameter_defaults: RsyslogElasticsearchSetting: uid: \"elastic\" pwd: \"yourownpassword\" skipverifyhost: \"on\" allowunsignedcerts: \"on\" server: \"https://log-store-service-telemetry.apps.stfcloudops1.lab.upshift.rdu2.redhat.com\" serverport: 443", "NovaComputeLoggingSource: tag: openstack.nova.compute file: /some/other/path/nova-compute.log", "<service_name>LoggingSource: tag: <service_name>.tag path: <service_name>.path format: <service_name>.format", "ServiceLoggingSource: tag: openstack.Service path: /var/log/containers/service/service.log format: multiline format_firstline: '/^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}.\\d{3} \\d+ \\S+ \\S+ \\[(req-\\S+ \\S+ \\S+ \\S+ \\S+ \\S+|-)\\]/' format1: '/^(?<Timestamp>\\S+ \\S+) (?<Pid>\\d+) (?<log_level>\\S+) (?<python_module>\\S+) (\\[(req-(?<request_id>\\S+) (?<user_id>\\S+) (?<tenant_id>\\S+) (?<domain_id>\\S+) (?<user_domain>\\S+) (?<project_domain>\\S+)|-)\\])? (?<Payload>.*)?USD/'", "NovaComputeLoggingSource: tag: openstack.nova.compute file: /some/other/path/nova-compute.log startmsg.regex: \"^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ \\\\+[0-9]+)? [A-Z]+ \\\\([a-z]+\\\\)", "startmsg.regex='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]+ [0-9]+)? (DEBUG|INFO|WARNING|ERROR) '", "Apr 20 15:20:34 rhev-a24c-01 proxy-server: 127.0.0.1 127.0.0.1 20/Apr/2015/19/20/34 GET /v1/AUTH_zaitcev%3Fformat%3Djson%26marker%3Dtestcont HTTP/1.0 200 - python-swiftclient-2.1.0 AUTH_tk737d6... - 2 - txc454fa8ea4844d909820a-0055355182 - 0.0162 - - 1429557634.806570053 1429557634.822791100", "Apr 27 17:08:15 rhev-a24c-02 object-auditor: Object audit (ZBF). Since Mon Apr 27 21:08:15 2015: Locally: 1 passed, 0 quarantined, 0 errors files/sec: 4.34 , bytes/sec: 0.00, Total time: 0.23, Auditing time: 0.00, Rate: 0.00 Apr 27 17:08:16 rhev-a24c-02 object-auditor: Object audit (ZBF) \"forever\" mode completed: 0.56s. Total quarantined: 0, Total errors: 0, Total files/sec: 14.31, Total bytes/sec: 0.00, Auditing time: 0.02, Rate: 0.04 Apr 27 17:08:16 rhev-a24c-02 account-replicator: Beginning replication run Apr 27 17:08:16 rhev-a24c-02 account-replicator: Replication run OVER Apr 27 17:08:16 rhev-a24c-02 account-replicator: Attempted to replicate 5 dbs in 0.12589 seconds (39.71876/s) Apr 27 17:08:16 rhev-a24c-02 account-replicator: Removed 0 dbs Apr 27 17:08:16 rhev-a24c-02 account-replicator: 10 successes, 0 failures" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/logging_monitoring_and_troubleshooting_guide/installing-and-configuring-the-logs-service_osp
About
About Red Hat OpenShift Service Mesh 3.0.0tp1 About OpenShift Service Mesh Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/about/index
5.3. Performing Minimal PCP Setup to Gather File System Data
5.3. Performing Minimal PCP Setup to Gather File System Data The following procedure describes how to install a minimal PCP setup to collect statistics on Red Hat Enterprise Linux. The minimal setup adds only the packages that a production system needs to gather data for further analysis. The resulting tar.gz archive of the pmlogger output can be analyzed by using various PCP tools, such as PCP Charts, and compared with other sources of performance information. Install the pcp package: Start the pmcd service: Run the pmlogconf utility to update the pmlogger configuration and enable the XFS information, XFS data, and log I/O traffic groups: Start the pmlogger service: Perform operations on the XFS file system. Stop the pmcd service: Stop the pmlogger service: Collect the output and save it to a tar.gz file named after the hostname and the current date and time:
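After you collect the archive, you can inspect it on any machine that has the pcp package installed. The following is a minimal sketch; the tarball and archive base names are examples only, because pmlogger names its archive files after the date and time of collection:

# Unpack the collected tarball and query the XFS metrics recorded in the pmlogger archive.
# Replace the file and archive base names with the ones produced on your system.
tar -xzf myhost.2020-06-30-14h30.pcp.tar.gz
pminfo -a ./myhost/20200630.00.10 | grep '^xfs' | head
pmval -a ./myhost/20200630.00.10 xfs.write_bytes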
[ "yum install pcp", "systemctl start pmcd.service", "pmlogconf -r /var/lib/pcp/config/pmlogger/config.default", "systemctl start pmlogger.service", "systemctl stop pmcd.service", "systemctl stop pmlogger.service", "cd /var/log/pcp/pmlogger/", "tar -czf USD(hostname).USD(date +%F-%Hh%M).pcp.tar.gz USD(hostname)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sec-minimal-pcp-setup-on-red-hat-enterprise-linux